Northwestern AI Professor on the Future of Entrepreneurship | EP 34

Description

In this episode of UNinvested, Sahil Seth sits down with David Zaretsky, a serial entrepreneur and Northwestern University professor specializing in AI, computer science, and electrical engineering. David shares his journey from academia to founding multiple successful startups, offering insights into the intersection of technology, education, and entrepreneurship. The conversation touches on the evolving landscape of AI, the importance of failure in entrepreneurial success, and the skills young entrepreneurs should prioritize in a rapidly changing world.

What we explore:

  • David Zaretsky's transition from academia to entrepreneurship

  • The role of AI in reshaping industries and its impact on productivity

  • Common mistakes young entrepreneurs make and the value of learning from failure

  • The debate around the necessity of learning to code in an AI-driven future

  • The significance of critical thinking and problem-solving skills over technical coding

  • Differences between open-source and private AI models, and their implications

  • Ethical concerns surrounding AI, such as deep fakes and security risks

In this episode, we cover:

[00:00] Introduction to the episode and guest, David Zaretsky

[00:14] David's academic background and transition into the startup world

[01:09] The pivotal moment that drove David towards entrepreneurship

[02:55] The mindset and maturity needed to succeed in a PhD and in startups

[03:53] Challenges and common pitfalls for young entrepreneurs

[05:27] Balancing optimism and realism in the face of AI's capabilities

[09:42] How AI is transforming productivity and the nature of work

[12:02] The ongoing relevance of coding skills in an AI-driven world

[14:44] Building startups that go beyond simple AI applications

[18:39] Open-source versus private AI models and the need for diversity in development

[22:52] The ethical implications of AI and how to address security risks

[24:24] Maintaining a balanced life as an entrepreneur and the importance of human connection

[00:00:00] Sahil: Want to build a tech startup? What do you do?

[00:00:02] David: Entering the fifth industrial revolution.

[00:00:03] Sahil: What do you think outside of the tech has changed?

[00:00:06] David: Every industry is going to radically evolve.

[00:00:09] Sahil: Should they still learn how to code?

[00:00:11] David: We do need to be fearful of what AI could do when it's

[00:00:14] Sahil: David Zaretsky is a serial entrepreneur, founding multiple companies over his career, while serving as professor at Northwestern University, specializing in AI, computer science, and electrical engineering.

He's taught hundreds of students, including myself, about the best ways to leverage AI while building a startup. It's my pleasure to sit down with him once again.

So professor, you obviously began your career at Northwestern. You went there as an undergrad, you got your master's there, eventually your PhD. So how do you transition from the world of academia to the startup world? The two are very different, in my opinion.

[00:00:48] David: I started out doing research really early on as an undergrad, and that led into my master's research work as we spun out a company called AccelChip, which was later acquired. And then I went back to do my PhD.

A few years later, I had commercialized my PhD research. I'd spun out two companies from Northwestern, mostly in the semiconductor tool space, around 2004. I was working with a company called Xilinx, and I remember going to their headquarters and looking down at this hall of just cubicle after cubicle.

It looked like a rat maze. I was flabbergasted that it literally looked like little mice inside a rat maze, and I thought, that is not what I want to be doing. At that point I made a decision: I'm not interested in working at these big companies.

I'd rather be a big fish in a little pond than a little fish in a big pond. And so that really set me towards more of an entrepreneurial journey and co-founding several companies.

Sahil: When you went back for your PhD, was your goal to do some research and eventually commercialize it? Or did you go back just for the love of learning, or wanting to gain some expertise in something else?

David: I look at a PhD as not just doing research. It's a mental state, a discipline of how to approach problem solving. You look at a problem, research it, deconstruct it, and understand how to find common areas where we can apply different thought.

It gives you a level of maturity, to be your own sort of entrepreneur and do the work on your own. It's really up to you to come back and solve the problems on your own. It's that level of maturity I think you have to develop.

[00:02:35] Sahil: You speak of maturity in entrepreneurship, but you teach a lot of undergrads who are going through the entrepreneurial path. I was one of your students. What do you think is the most common mistake that the young entrepreneurs you teach are making today?

[00:02:49] David: As much as I love teaching entrepreneurship, you really can't teach entrepreneurship. You can read a book, you can go to a classroom, and you'll learn the recipes, but the greatest lesson you can learn as an entrepreneur is failure.

You have not learned how to be an entrepreneur until you really learn how to fail, and fail fast. There's no direct line from point A to point B. It's lots of waves, lots of ups and downs, lots of hurdles, lots of sleepless nights. You have to be prepared for that, and no, you can't learn that in a classroom.

So you have to be cognizant of whether or not you want to go through that experience, right? What's most important in my classes is giving students the tools so that they're prepared. You always have a plan B and a plan C in your back pocket, so that when these things arise, you're ready for them. You're still going to go through these ups and downs, but at least you can ride the waves and better prepare yourself for the inevitable.

[00:03:52] Sahil: Have there been any ideas that your students have had that are like, this is a great idea, it can't fail, and then end of the day, it did fail?

[00:03:59] David: Oh, there's lots of those, and there's lots on the other side. I'll give you an example. In my last class, there was a team working on an AI psychologist.

There are long lines to get in to see a therapist. Many people are struggling with anxiety and other problems, certainly students, and there's lots going on around the world. The thought was, we can create an AI that can respond to anxiety and to people with immediate needs, and so on.

And I just thought, it's a great idea, but it's laughable, because AI can't have that emotion. It doesn't make sense for an AI to take the place of a psychologist. You can give it every book in the world and it still won't exude that emotion, that sensitivity. Then, not six months later, NVIDIA and OpenAI started coming out with all-new platforms and GPT-4o. And listening to the emotion built into the newer AIs and LLMs, I sat back and thought, oh my God, that was the missing piece. You can now create, or synthesize, emotion.

Now I go back and think to myself, probably in another couple of years this thing will be advanced enough. I didn't think it could get that advanced that quickly, to create that level of emotional response when you're listening to it. I tell all my students, you've got to think outside the box, like really far outside the box, and some people go really far outside the box.

And I'm like, whoa, take a step back there, maybe that's a little too far. But maybe I just wasn't thinking far enough.

[00:05:36] Sahil: Do you think AI, because you need such a new, raw train of thought, has shifted the balance of power to young entrepreneurs, who almost have an advantage with little experience? Or do you still think having that entrepreneurial backing, maybe being a multi-time founder, plays more to your advantage when leveraging a new technology?

[00:05:54] David: Think of a new entrepreneur who is really wide-eyed and eager and thinks they can conquer the world, versus somebody who's done it several times. The seasoned founder might know how to navigate things, but that ability to take big leaps can sometimes be more difficult for them, versus somebody who just doesn't see all of the noise that we do.

I would suggest that the newcomers have an edge early on. I do think, though, that as I mentioned, you have to go through this journey of failure. You have to actually experience failure and build a thick skin to be able to continue doing this over and over again.

[00:06:40] Sahil: And building in this AI world, what do you think outside of the tech has changed since you started building, since you started teaching?

[00:06:48] David: The level of productivity is increasing by orders of magnitude. Maybe I can code a thousand lines of code in a day; I can do 10x that using ChatGPT and similar tools. And so there's a benefit there, right? As an entrepreneur, I can now spit out an MVP and iterate and add new features much quicker.

The idea of agile development and sprints is going to come much easier, because the ability to quickly iterate, test, and evaluate is going to become so much easier. The level of code and accuracy and all those things, you'll be able to do that much, much quicker. So there's a benefit for those who are really in tune with

AI today and are really jumping in. That is absolutely game-changing. And that's why right now we're entering the fifth industrial revolution, where things are going to radically change. I tell people this, and it's scary: it's not just one little area of change, not just entrepreneurs and computer scientists. Literally every industry is going to radically evolve, to the point where it's almost like you're going to wake up one day in the next six months or a year and not recognize it. It's like looking back and saying, oh, I remember the nineties, I grew up in the nineties. It's like flipping a switch. You're not going to recognize the old days: what was coding like? You're not going to need to do that anymore.

People are going to be elevated to a higher point: directors, architects. We're going to be giving orders, prompting, and the AI will be building that under the hood. Instead of programming in higher-level languages, we're going to be speaking in higher-level languages, and the AI is going to be the synthesizer that turns that into action.

I don't know about computer science as we think about it today, the programmers and so on. I think over time that type of design work is going to slowly get phased out, and we'll be doing more high-level prompting. We'll be putting blocks together instead of actually going down and coding. And the velocity at which change is happening, if you follow AI and what's going on right now in the industry, it's happening so rapidly that you blink your eye and it's already moved on.

[00:09:22] Sahil: Jensen Huang, the founder of NVIDIA, has said that students, young people, shouldn't learn how to code, because in an AI world you won't need to.

But then Garry Tan, the CEO of Y Combinator, still says there's an art to coding, almost like the digital camera didn't replace artists. So where do you land, and what advice would you give to a young entrepreneur? Where should they spend their time? Should they still learn how to code, or is there something else they should prioritize?

[00:09:49] David: I use another analogy in my class. Once upon a time, we as humans used to get around by horse and buggy, and by biking. Then the car was invented, and the car really just allowed us to travel longer distances and connect with people and family much quicker, much more efficiently.

But the bicycle didn't disappear; we still ride the bicycle. Many people still ride bicycles to work. More and more people today are obviously taking cars and such to get around, and that might change again in the future. But just because new technology comes in to augment or replace or improve something, what it did was allow us to be more productive.

I can get from here to work in 20 minutes versus an hour, so there's more productivity, and I can be working on other things. But I still enjoy biking, I love the exercise and the feeling of it. I think the same way about coding: you will always need people who are developing the applications that other people need.

So I don't think that's going to go away. For a while now, you're not going to have just AI building AI. There will still be the researchers and developers who are developing those next-generation tools. That's not going to go. But one particular person will be able to do the job of 10 or 20 people.

And so in a particular company, you'll cut down your work staff. Where do those other people go? They'll create new jobs, they'll create new companies; there will be other opportunities. What matters most is not the actual coding, but the critical thinking skills that go around it, right?

You were referring to some of it as art, and that's absolutely true. I get the same sort of feeling when I write a successful piece of code as I do when I pick up a guitar or play the piano. There is an element of creativity, that same kind of feeling.

So I don't think that's going to go away. I do think, though, that the critical thinking skills are really what matters. And that's, by the way, just one of the things, I know I'm rambling here, but you have a lot of students who are just learning new languages. They're picking up a language, asking which language they should learn. One thing I tell my students is that it really doesn't matter at this point, because a lot of that is going to disappear. What matters most is that you understand the underlying semantics, the algorithms, the data structures. You understand what sort of technology to apply to what situation. That critical thinking skill is important, to be able to direct the AI, or whatever it is, to implement the right sort of solution for your problems.

[00:12:34] Sahil: Say you're me, 23, you know how to code the basics, or don't know how to code at all, and you want to build a tech startup. What do you do?

[00:12:42] David: Hire a CTO who's going to be your go-to for all of those things. We're not past needing that yet. But I would say that you never want to outsource your IP. Your technology needs to be done in house, so you don't outsource it to some dev shop.

It's really important: fundamentally, if you've got a tech startup, you need to bring in a technology person to help develop, or oversee the development, or actually hit the ground and do the design work too. That's often what I do: I help startups, I come in and build or architect the technology from the ground up, build the teams, launch the MVPs, and get it to the next stage.

[00:13:24] Sahil: So what do you think are the most overhyped and underhyped parts about AI?

[00:13:28] David: You don't want to underestimate the potential capabilities, because I think that's probably the biggest mistake. We do need to be fearful of what AI could do when it's put in the hands of the wrong people.

But we also need to be optimistic about the capabilities, the problems we'll be able to solve: solving cancer, solving new scientific problems, identifying threats, and other things that I think could be really important. Where I think a lot of startups fall short these days is that they build their entire technology on top of an LLM, and they're just using the LLM to spit something out. That in and of itself is not a value, and I think investors have caught on to this, right? Anyone could copy that. It's going to come down to your data. What data do you have, what knowledge base do you bring, that nobody else has?

That has to be understood. If you just have an app that can talk to an LLM and it will set up tasks for you and so on, anyone can replicate that. What can you do that's unique that others cannot?

[00:14:37] Sahil: So you're really talking about what people consider to be ChatGPT wrappers.

And Sam Altman has come out and said, if you're making a wrapper on top of OpenAI, GPT-5 will, in his words, steamroll you. So what really is your definition of a ChatGPT wrapper, and of something that will stand the test of time and can't get steamrolled by OpenAI?

[00:14:58] David: It's what you do with the LLM at the end of the day that I think makes the difference.

As an example, there's something called agentic AI, for those who haven't come across it yet. When you code, it's an iterative process. You'll write some code and then you'll test it. Maybe you'll get it right the first time, maybe not, and then it'll report a bug, you'll go back, fix the bug, and try again. Maybe there's another bug, or maybe the performance is not as good. Many things in life are all about trial and error, and the agentic flow essentially is this iterative process. LLMs today are basically single-shot, right? You put in a prompt and it spits something out. There is no iteration there.

You can plug in tools, and you can say, go scrape the web and find this information. But it often suffers from hallucinations; sometimes it gets things wrong. So if you just accept it as ground truth on the first shot, I think that's where you're going to run into problems. Where you start to build newer capabilities is in this iterative flow: how quickly can you iterate, validate, optimize, and so on, go through that loop, and pull in different resources? Now you're building in what's called LLMOps, tying different threads together to create something much more powerful. An LLM such as GPT-5 is going to steamroll you?

Yeah, it's going to have a much more powerful engine, but it's still going to suffer from the same problems that GPT-4 and GPT-3.5 and all these other ones do: the hallucinations, and the limited amount of knowledge it has, which is the stuff it's scraped off of the public web. It doesn't have private knowledge. You have to be able to bring those things to the table.
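The iterative, agentic flow described here can be sketched as a generate-test-feedback loop. A minimal Python sketch, where `generate` stands in for any LLM call and `run_tests` for any validation step (both names are illustrative assumptions, not a specific API):

```python
def agentic_loop(task: str, generate, run_tests, max_iters: int = 5):
    """Iteratively generate code, validate it, and feed failures back.

    `generate(prompt)` is any LLM call returning a code string;
    `run_tests(code)` returns (passed, error_message).
    Returns (accepted_code, attempts) or (None, max_iters) on give-up.
    """
    prompt = task
    for attempt in range(1, max_iters + 1):
        code = generate(prompt)
        passed, error = run_tests(code)
        if passed:
            return code, attempt  # accepted on this iteration
        # Fold the failure back into the next prompt and try again.
        prompt = f"{task}\n\nAttempt {attempt} failed with:\n{error}"
    return None, max_iters  # exhausted: surface for human review
```

The loop structure, not the single LLM call inside it, is where the added value lives: validation, retry, and resource-pulling are what a raw prompt lacks.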

[00:16:50] Sahil: When you talk about a ChatGPT wrapper, what is enough added layering that you're building something that, like you were saying, someone else can't just replicate? How do you know you're doing enough?

[00:17:01] David: It has to pass the smell test. I don't know that anyone can necessarily tell you, but internally, if you're the designer, the inventor: if you are simply asking the LLM to spit something out one time, that's not going to be sufficient, right?

Suppose you have an LLM in your Siri, your HomePod, or your Alexa, and you say, what's the weather? And it reports the weather. That's not innovative. But if you say, what's the weather, and it says, it's going to be super sunny out, let me turn down the shades in your apartment for you, right? It has not just automation built into it, but knowledge of its environment, its surroundings, to be able to take actions or suggest actions.
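That jump from reporting to acting can be illustrated as a mapping from environment observations to device actions. A toy sketch, where the thresholds and action names are made up for illustration:

```python
def suggest_actions(weather: dict) -> list[str]:
    """Map a weather observation to smart-home actions (toy rule set)."""
    actions = []
    # Bright sun: protect the apartment from glare and heat.
    if weather.get("condition") == "sunny" and weather.get("uv_index", 0) >= 7:
        actions.append("lower_shades")          # hypothetical device command
    # Hot day: pre-cool before the occupant gets home.
    if weather.get("temp_f", 70) >= 85:
        actions.append("set_thermostat_to_72")  # hypothetical device command
    return actions
```

A real assistant would route such actions through an LLM tool-calling layer rather than hard-coded rules; the point is that the value comes from coupling the model to state it can act on, not from the weather report itself.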

[00:17:50] Sahil: Mark Zuckerberg and Meta just came out with Llama 3.1, an open-source model, and obviously OpenAI's is a closed model.

What is your take on the whole debate around regulation of open-source models, and on having a few private companies run the largest models?

[00:18:08] David: I think it's good to have a balance of both. We learned from Google's epic failure with the release of Gemini, which was programmed with a notion of diversity that had it returning some very incorrect historical knowledge, false knowledge.

That was catastrophic for them as a first model. I think communities should decide what is the best sort of training, the best sort of model, for their use cases. So providing an open-source model like Llama 3 that you can then fine-tune, where you can do whatever you need to do, or train on what you need, that's great. Let the individual communities regulate themselves.

You do need private companies innovating and creating some of these things too. We're in a capitalistic society, and I believe in capitalism; you should have private companies creating private versions of their tools. Some people will use those, and others will look to use some of the open source.

I will say, as somebody who's evaluated many of these, there is a limitation on some of these open-source models and what their capabilities are. If you're creating a chatbot with serious capabilities, able to have really dynamic conversations, you're not going to be able to do that with a Llama 7B. And you need state-of-the-art hardware and technology to run a Llama 3.1 at its full capacity. That's very costly, and hosting it is not necessarily attainable right now for most companies.

So just because they released it doesn't mean it's really, quote unquote, available to the masses. Diversity in thought is really the most important thing. Putting it in the hands of a couple of tech companies to own the technology, and to decide what the LLM can and shouldn't say, has already proven to be an epic disaster and bad for everybody.

And so my thought is that open source creates a balance.

[00:20:24] Sahil: And what about the negatives, though? Consider deepfakes, but AI can also be used in more malicious ways than that. What are your thoughts on that as the counterbalance to open source, to allowing anyone to have access to an LLM and program it to do anything they want?

[00:20:41] David: The simple answer is you don't shut anything down. What you do is come out with newer technologies to combat the security risks, to identify the deepfakes, and so on. You don't shut down speech to protect speech; you create more speech to combat bad speech. I think it's the same with technology. We're talking about people creating images and creating videos.

Yeah, it sucks, but what we need is tools that can recognize that this is a synthesized image, watermarked by the AI as having been synthesized, versus an original. And there's technology to do that. Those capabilities could be built into the LLMs, which might be appropriate. So you're not shutting down speech, or thoughts, but you are watermarking their originality.
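The watermarking idea mentioned here can be illustrated with a toy least-significant-bit mark embedded in pixel data. Real provenance systems (for example, C2PA-style signed metadata) are far more robust; the `MARK` pattern and function names below are made up for illustration:

```python
MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # toy provenance signature, 8 bits

def embed_mark(pixels: list[int]) -> list[int]:
    """Write the signature into the LSB of the first len(MARK) pixel values."""
    out = list(pixels)
    for i, bit in enumerate(MARK):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def is_marked(pixels: list[int]) -> bool:
    """Check whether the LSB pattern of the leading pixels matches MARK."""
    return [p & 1 for p in pixels[:len(MARK)]] == MARK
```

Because only the lowest bit of each value changes, the mark is visually invisible but machine-checkable, which is the property the "more speech to combat bad speech" argument relies on.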

[00:21:34] Sahil: I do have one last question for you. Since you started your career in entrepreneurship, has there been any routine that you've maintained throughout the years?

[00:21:43] David: You gotta have balance in life. I look at being a professor as half of that balance.

Being an entrepreneur can sometimes be a very lonely trek. You're working in small teams, maybe you're solo, maybe your teams are scattered all over. Being able to walk into a room and teach and mentor and converse, that's one part of the balance against this heavy work.

With so many things going on in this world, it's easy to get lost. But you've got to get out there, connect with people, find your place. You can't just shut yourself in a little room in a basement and code the rest of your life. I think that's what makes a true entrepreneur: somebody who really ventures out into the world.

[00:22:27] Sahil: I think that makes for the perfect ending. And with that, I'm Sahil Seth.

[00:22:31] David: I'm David Zaretsky.

[00:22:32] Sahil: Uninvested. Thank you.

[00:22:33] Speaker 3: This is a personal video. Any views or opinions represented in this video are personal and do not represent those of people, institutions, or organizations we may or may not be associated with in a professional or personal capacity.

The views expressed are for entertainment purposes only and not to be misinterpreted as actionable investment advice.
