Among investors he is the best at media, and among media figures he is the best at investing. That is an apt description of Marc Andreessen, co-founder of the leading venture firm a16z. Starting with the Netscape browser and going on to become a well-known Silicon Valley venture capitalist, Andreessen has ridden the dot-com, social media, and mobile Internet waves, and he remains active today. With AI running hot, he has added OpenAI, Mobius AI, and other companies to his portfolio. Outside of investing, Marc, a “tech optimist” who has long shared his views on social media, recently argued that “AI will save the world”.
At Databricks’ Data+AI Summit on June 29th, Marc Andreessen sat down with Databricks CEO Ali Ghodsi to share his views on the current state of AI and why he doesn’t think AI will bring about an existential crisis for humanity.
Here’s what Marc Andreessen had to say at the event:
1. AI is becoming the ‘ultimate media’
The idea of artificial intelligence was actually conceived in the 1930s and 1940s; people have been thinking about AI for about 80 years. It has accompanied the computer industry and the Internet all along, and people kept looking for ways to make it work, but it was never a major force in the industry.
There’s a great book, Rise of the Machines, that tells the backstory of AI. In the ’30s, ’40s, and ’50s the field was called cybernetics. Even before the advent of electronic computers, people like John von Neumann and Alan Turing were arguing about them. They already knew electronic computers would be built, and people had been working on how to build them ever since Babbage’s Difference Engine. The core issue of their debate was the nature of the computer itself: should a general-purpose computer be what is now known as a von Neumann machine, executing sequences of instructions deterministically according to the programmer’s intent? Or should it be modeled on the human brain? The foundational neural network paper was published in 1943, so they already knew that computers could, in principle, be built in a neuronal fashion.
A number of people argued at the time that we shouldn’t build a von Neumann machine; we should use the brain model instead. But there were no chips, no data, and none of the underlying technology, so they couldn’t make it happen. Then, in the last five years, there was a major breakthrough and this approach suddenly started to work. One of the most interesting questions is: why now? It has a lot to do with the theme of this conference, because a big part of the answer is data.
It turns out that making AI work takes an enormous amount of data. We needed the Internet to reach its current scale; we needed access to the full corpus of the global web and the comprehensive crawl data that feeds search engines; and we needed access to all the image data, including Google’s images and videos, to train these models. It turns out they do work, and of course that means that making AI work even better now requires making even more data available. So it feels as if the worlds of Internet data and AI are colliding, and magic is happening.
There’s a Marshall McLuhan idea that I agree with. McLuhan was a famous media theorist, and some 50 years ago he observed that every medium becomes the content of the next medium. When radio came along, what did broadcasters do? They basically read newspaper articles on the air. When television came along, what did it do? It basically televised lectures and stage plays. When the Internet came along, it suddenly became a platform that could encompass all previous forms of media, including television, movies, and everything else.
Artificial intelligence is the ultimate example of this idea: different media forms essentially become components of AI training. One of the major breakthroughs in AI right now is the concept of multimodal AI. If you use ChatGPT today, it was trained on text; if you use Midjourney, it was trained on images; but the new AIs about to be released are trained on multiple media types at once. You will have AI trained simultaneously on text, images, video, structured data, documents, and mathematical equations, able to work across all of these domains. Every form of media that has ever existed matters as data.
2. AI training AI, and computers that can create as well as compute
The previous generation of AI will also be a data source for the next generation, and bigger and better AI will emerge from that. AI research today is mostly about using human-created data to train AI, with humans then doing what’s called reinforcement learning, essentially tweaking the AI’s outputs. But a lot of research now focuses on how to make AIs teach and train each other. So there’s going to be a ladder, cascading upward, where AIs actually train their successors.
The current neural network is a new type of computer: a probabilistic computer. What does that mean? Ask it the same question twice and it will give different answers. Ask the question in a different way and it will give a different answer. Change the training data a little and it will give a different answer. Praise it, tell it to answer in the style of some famous person, or apply all kinds of prompt engineering, and it will give different answers again. And one of the amazing things it can do is hallucinate.
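The non-determinism described here comes from how language models pick each next token: they sample from a probability distribution rather than always taking the single highest-scoring option. Below is a minimal sketch of that mechanism; the vocabulary, scores, and temperature values are made up for illustration and don’t reflect any particular model.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample a token index from raw scores via softmax with temperature.

    Higher temperature flattens the distribution (more varied answers);
    temperature near zero approaches a deterministic argmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1

# A toy "model": made-up raw scores for three possible answers
# to the same question.
vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.0, 0.5]

# Asking the same question ten times can yield different answers:
answers = [vocab[sample_token(logits, temperature=1.0)] for _ in range(10)]

# Driving temperature toward zero makes the sampler effectively deterministic:
greedy = [vocab[sample_token(logits, temperature=0.01)] for _ in range(10)]
```

At temperature 1.0 repeated runs tend to vary; near zero the same answer comes back every time, which is the deterministic behavior engineers expect from a classical computer.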
If it doesn’t know the answer, it will make one up. Engineering-minded people are horrified when they see this, but creative people marvel at it: wow, a computer can actually create things. We finally have a computer that produces fictional works of art, and that’s pretty amazing.
I talk to a lot of my friends, and some of them say, “Well, I don’t know if I can use AI, because I’m not sure the answer is right.” To which I reply, “Well, have you ever worked with another person?” If a person tells you something, at some point you may also want to double-check that what they said is accurate. But the reason you interact with other people is that they have a mindset you don’t have, and they come up with ideas you wouldn’t.
Now we have both kinds of computers: the engineering type that outputs a deterministic result, and the type that can create.
What happens next is that they get integrated, and you end up with hybrid systems. Take ChatGPT: ask it math or science questions and it often answers incorrectly. But combine it with the Wolfram Alpha plugin and it suddenly starts answering correctly. So I think a form of engineering will emerge that combines these two computational models, and you will have computers that can both create and execute.
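The ChatGPT-plus-Wolfram-Alpha pattern is at heart a routing decision: send questions a deterministic tool can answer to that tool, and everything else to the generative model. Here is a minimal sketch of that idea, with a safe arithmetic evaluator standing in for Wolfram Alpha and a stub standing in for the language model; both are illustrative assumptions, not real APIs.

```python
import ast
import operator

# Deterministic "tool": safely evaluate simple arithmetic expressions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str):
    """Evaluate +, -, *, / arithmetic by walking the AST (no arbitrary eval)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def creative_model(question: str) -> str:
    # Stand-in for a generative model; in reality this would call an LLM.
    return f"Here is a plausible, possibly imaginative take on: {question}"

def hybrid_answer(question: str) -> str:
    """Route to the deterministic tool when possible, else to the model."""
    try:
        return str(calc(question))
    except (ValueError, SyntaxError, ZeroDivisionError):
        return creative_model(question)
```

For example, `hybrid_answer("2 + 3 * 4")` returns the exact answer from the calculator, while `hybrid_answer("tell me a story")` falls through to the generative side. Real plugin systems let the model itself decide when to call a tool, but the division of labor is the same.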
3. AI will not end programmers
I have an eight-year-old, and for me the most emotionally significant thing about the development of AI is that from now on, every child, mine and everyone else’s, will grow up with an AI teacher, coach, mentor, and counselor. It will be with them throughout their lives and will do everything possible to ensure that everyone reaches their full potential.
About a month ago, I introduced ChatGPT to my eight-year-old and installed it on his laptop. I told him he could ask it any question and it would answer. He said, “Well, sure, isn’t that what computers are for? Of course it will answer all your questions.” He didn’t grasp the significance, but I did: I remembered every step the computer industry had taken to reach the point of being able to answer any question, while to him it was obvious. I think kids will grow up in a very different and better world.
I tend to think that really good programmers will still need long training and a deep grasp of the fundamentals of programming, just as really good mathematicians still need mathematical training even with calculators. So really good programmers will still understand everything from the ground up, but they will be more efficient than ever before and able to do more in their careers.
What most programmers do in the future will be elevated a level. Increasingly, the job will be like being a manager of programmers rather than writing all the code yourself. We’re all managers, managing AI. Right now we use tools like GitHub Copilot, where the AI helps by making suggestions, fixing bugs, and so forth. As these systems grow more capable, you as a programmer will be able to give them more complex tasks: just tell them to write this code, write that code, do this, do that, and they will go off, execute, and report back to you.
My guess is that today you’re one human paired with one AI copilot, and in the future it will be one person paired with several: maybe two at first, then five, then ten. Maybe very skilled programmers will have 1,000 of these AI systems. You’ll effectively be overseeing a workforce of AIs, and the question becomes how much time, attention, and energy you can devote to that oversight. A lot of people who can’t code today will also be able to program effectively. This trend has been underway for a long time, producing many low-code and no-code tools that let ordinary people write programs without a computer science degree, and I think it’s going to accelerate. So a lot of non-professional programmers will be able to create code.
There’s a classic fallacy in economics known as the lump of labor fallacy: a zero-sum view of the world holding that there is a fixed amount of work to be done, and if machines do it, there is nothing left for humans. In reality, the opposite happens. When machines can do work instead of humans, you free people up to do more valuable things. There was a time when nearly everyone was a farmer; after the industrial revolution, a huge share of people worked in factories. Today far smaller percentages work on farms and in factories, yet there are more jobs overall, because so many new needs arose and so many new businesses and industries were created. I think this will lead to tremendous economic growth, which will lead to a lot of job growth and wage growth.
In addition, coding has the property that the world will never be satisfied with it. There are always more programs to write, always more things to do with code. Everyone in business knows this: no one is ever satisfied with what their software accomplishes; what they lack is the time and resources to actually build the software they need. So my guess is that an enormous amount of software will be produced, and that there will ultimately be many people working in software development.
4. There is no such thing as “evil AI” that destroys humanity
There is a recurring idea in human history that something will come along that fundamentally changes the human experience, and that it will lead either to utopia, the concept we call the “singularity”, or to dystopia, a hellscape where everything falls apart. I’m an engineer by trade, and to me that sounds very much like a sci-fi plot. So I don’t think that’s actually what will happen.
An economist at Berkeley, Brad DeLong, made a point that I really liked: he described what we’re doing as a species, as a civilization, as “Slouching Towards Utopia”. I really like that phrase. What it means is that things have been getting progressively better in material well-being, health, and intelligence, but not in a way that leads to an actual, literal utopia. We are slouching towards utopia: despite living in an imperfect, flawed, and fallen world, we still manage to improve it to some extent. This attitude is a cautious form of optimism, not a radical one.
There are currently two views on how AI might bring about the end of the world. One holds that AI will develop its own goals: like in “The Terminator”, it will wake up one day and decide to hate us. My answer is that AI is not like a human being; it doesn’t have consciousness, it doesn’t have a will, it doesn’t have any of those things. The other view, held by the so-called “AI doomers”, is that AI doesn’t need self-awareness or any form of ego to create scenarios that destroy humanity.
For example, there is the famous “paperclip maximizer” thought experiment, which posits that after someone tells an AI to make paperclips, the AI decides it must convert every atom on the planet, including the atoms in everyone’s bodies, into paperclips. To maximize the number of paperclips in the world, it will develop its own energy sources, master fusion technology, build its own space stations, and field its own robot army. It will do everything it can to maximize the number of paperclips.
I’m left wondering whether that makes it an odd example of free will or of no free will. And we have to consider practical limitations: where will it get the chips to run the complex algorithms needed to make all those paperclips? As of today, we can’t even get chips to run the AI in our startups. Maybe right now there’s an evil baby AI in Databricks’ labs that wants to rule the world, and it has already sent a purchase order to Nvidia, but it hasn’t gotten any chips at all (laughs). I think we should wait and see whether those evil baby AIs show up before worrying too much about large ones.
The reason to be optimistic about AI is that it deals with the concept of intelligence. We know a lot about human intelligence; it has been one of the most important subjects of social science over the past century. The finding is that, in humans, intelligence makes everything better. This is a strong thesis, and a great deal of research supports it. People with higher intelligence are more successful academically and have more successful careers; their children are more successful; they are healthier and live longer; they are less violent; they are better at handling conflict and solving hard problems. They are also less prejudiced, more open-minded, and more receptive to new ideas. So applying intelligence is the one thing that makes everything better for humans.
The world around us, including the ability to meet and interact in spaces like this, is not something we woke up to one morning, with wonderful buildings, electricity, and everything else just waiting for us. Humans built it step by step by applying intelligence. We use intelligence to build everything that makes the world work, but we have been limited by our own capabilities and our data. Now we have the opportunity to apply machine intelligence to all of these endeavors, enhancing everyone’s ability to act in the world.
5. Humans working on AI are heroes
Starting in the 1940s, AI scientists worked for 80 years essentially without a payoff. I remember studying computer science, including some AI, in college. At the time AI was a fringe field, a theory under question. There was an AI boom in the ’80s that didn’t pan out; the bubble burst, and it was a very bad time. By the end of the ’80s, AI was in serious doubt. The current wave is the fourth such cycle: again and again there was great hope for AI, and it failed to materialize.
The AI scientists of that era worked in AI and computer science departments and labs; they got their PhDs, became professors, taught AI for 30 years, retired, and worked their whole lives probably without any big results to show for it. Many of them have passed away.
They were doing research based on a set of theories and ideas that we now know work, but it took 80 years to get there. Think of the determination, vision, courage, insight, and tenacity required to go into a field where you would never see a return on your investment; people at the time must have questioned whether they were in their right minds, doubting the work would ever succeed. Now that we know these ideas work, we’re amazed: wow, they saw the future. They understood all along what had to be done; it just took time for it to pay off. I put them in the category of legends.
I think the people pushing AI forward right now can be counted as heroes: the entire population of people working on AI research, including everyone at this conference today. I use the word “heroes” deliberately because, as I discussed in my essay, we’re in a cultural moment where people are angry about everything. I don’t know if you’ve noticed, but a lot of people are in a bad mood right now and upset about a lot of things; the world is in some kind of emotional slump.
So as soon as anything new comes along, there’s immediately an argument about how bad it is, how terrible, how it will destroy the world and ruin everything, and reading the newspaper reports, everything looks like a disaster. So I think anybody who is making the world a better place through this technology is a hero. People have been working toward this for a very long time, and now we can finally reap these AI benefits. We’re really lucky right now, and every one of you can be a hero in the future.