Top VC: AI's Biggest Risk Is Not Pursuing It as Hard as We Can

Author: Wu Xin

Recently, Marc Andreessen published an essay on his firm's website, "Why AI Will Save the World," boldly questioning the rationale behind the current calls for AI regulation and systematically criticizing the arguments on which those calls are based.

This article draws on his earlier interviews and writings, and on the views of economists and politicians he admires. It is divided into five parts that analyze and interpret the core of his essay, helping readers understand why he believes optimism is always the safest choice, and at the same time see the risks inherent in pessimism and skepticism. These views may seem crazy to many people, and it is hard to think in this way. But as Marc Andreessen has said, by debating with people you can roughly build a model of how they think, learn to consider problems from their perspective, and make your own thinking more objective and neutral.

Pessimism is prevalent in almost every society. ChatGPT swept the world, and OpenAI CEO Sam Altman appeared at a US congressional hearing calling for the regulation of AI.

He then signed a statement on AI risk together with Stuart Russell, Geoffrey Hinton, Yoshua Bengio, and others.

Soon after, Musk's name appeared on an open letter signed by thousands of people calling on AI labs to immediately pause their research. At moments like this, Marc Andreessen, who helms the top venture capital firm a16z, always stands up as the cheerleader singing the opposite tune.

There is a paradox at the heart of American culture: in theory, we love change, but when change actually materializes, it meets with massive backlash. Andreessen, for his part, is very optimistic, especially about new ideas; he thinks he is probably the most optimistic person he has ever met. "For at least the last 20 years, if you bet on the optimists, you're generally right."

He has every reason to think so. In 1994, Marc Andreessen came to Silicon Valley for the first time, co-founded Netscape, and took it public in record time.

Andreessen, photographed barefoot on a throne, later appeared on the cover of Time magazine and became the Silicon Valley archetype of the technology-wealth myth, attracting countless followers. In a way, he is the one who lit the fire in Silicon Valley, and he is the kind of optimist the physicist David Deutsch describes in The Beginning of Infinity: one who hopes to achieve progress by creating knowledge, including the unforeseen consequences of that progress. Pessimists are different. They take pride in their children's observance of approved patterns of behavior and lament every real or imagined novelty; they try to avoid everything not confirmed to be safe.

Very few civilizations have survived by being cautious about innovation. As David Deutsch writes in The Beginning of Infinity, most civilizations that were destroyed were in fact enthusiastically implementing the precautionary principle (avoiding everything not known to be safe in order to avoid catastrophe). The pessimist's premise is a world that stays stable and unchanged for a hundred years, which never actually happens. Skeptics are always wrong. Marc Andreessen has said as much.

**01. AI Regulation: Who Benefits? Who Is Harmed?**

Marc Andreessen calls himself an "AI accelerationist": a believer who wants to accelerate AI-driven social change and overcome the resistance to it, and who is naturally skeptical of calls for regulation. "There is a view that government regulation is well-intentioned, benign, and properly implemented. This is a myth." Andreessen has long held that one of the ills of the American system is regulation: the government keeps producing laws and rules, many of them on the order of "no alcohol sold on Sundays" or "men may not eat kimchi on Tuesdays."

Regulatory economist Bruce Yandle proposed a concept in 1983 to explain what goes wrong with government regulation: the Bootleggers and Baptists theory. Yandle argued that the passage of alcohol Prohibition was driven not only by the Baptists, whose religious convictions told them that alcohol was harming society, but also by the bootleggers behind the scenes. The bootleggers supported increased government regulation because it reduced competition from legitimate merchants: since consumers could not buy liquor on the open market under Prohibition, they naturally turned to the bootleggers. The theory holds that the Baptists provide the moral high ground for regulation (so the government does not have to invent high-sounding reasons), while the bootleggers quietly persuade politicians behind closed doors (the colluding interests); such an alliance makes it easier for politicians to satisfy both groups. **The theory also holds that such alliances produce sub-optimal legislation: while both groups are satisfied with the outcome, society as a whole might be better off with no legislation, or with different legislation.** Andreessen borrows this theory to show why regulation with good motives often does bad things.

"Often the outcome of such reform movements is that the bootleggers get what they want: regulatory capture (regulators becoming servants of a few commercial entities), insulation from competition, the formation of cartels. Meanwhile the well-intentioned Baptists are left wondering where their drive for social progress went wrong," he wrote in his recent essay, Why AI Will Save the World.

In the field of artificial intelligence, the "Baptists" are the true believers who genuinely think AI will destroy humanity. Some of these believers are even innovators of the technology themselves, and they actively advocate all kinds of strange and extreme restrictions on AI. The "bootleggers" are the AI companies, and the people paid to attack AI and fuel the panic (who on the surface look like Baptists), such as the "AI safety experts," "AI ethicists," and "AI risk researchers" hired to issue doomsday predictions. "In practice, even when Baptists are sincere, they are being manipulated and used as cover by bootleggers to further their own interests," Andreessen wrote in the essay. "If regulatory barriers are erected in the name of AI risk, the bootleggers will get what they want: a government-protected cartel of AI vendors shielded from the impact of new startups and open-source competition."

**02. Unemployment Fears and the "Fixed Pie Fallacy"**

Since he is wary of regulation, Marc Andreessen naturally disagrees with many of the arguments put forward in support of it. But he does not deny that some of the underlying questions are worth discussing. For instance: is technology eating all the jobs? What about income inequality, and the fear that AI will upend human society?

**It is a common economic fallacy to treat market activity as a zero-sum game, that is, to suppose there is a fixed pie in which one party can gain only at another's expense.** The claim that automation causes unemployment is a version of this "fixed pie fallacy." As Andreessen puts it in the essay, the assumption is that "at any given time there is a fixed amount of labor to be done, either by machines or by humans, and if machines do it, humans are out of work." But that is not how it works.

Take a simple example. The owner of a garment factory buys a large number of machines. The machines themselves require labor to build, creating jobs that would not otherwise exist. Once the cost of the machines has been recouped, the factory owner earns excess profits thanks to the cost advantage. There are many ways to spend that money: expanding the factory, investing in the supply chain, buying a house, consuming more. However it is spent, it creates employment in other industries. Of course, the cost advantage will not last forever. To keep up, rivals also start buying machines (so the machine makers get still more work). Coats become more plentiful, prices fall, and garment factories are no longer as profitable as before. But when more people can afford coats at the lower price, stimulating consumption, the garment industry as a whole may end up employing more people than it did before the machines arrived. And once coats are cheap enough, consumers spend the savings on other things, raising employment in other industries as well.

"When technology is applied to production, productivity grows: inputs fall and output rises. The result is that the prices of goods and services fall, leaving us extra spending power with which to buy other things. That raises demand, which drives new products and new industries, creating new jobs for the people previously displaced by machines," Andreessen writes. "When the market economy functions properly and technology is introduced freely, this becomes a never-ending upward cycle: a larger economy, greater material prosperity, more industries, more products, more jobs."

What would it mean if machines replaced all existing human labor? "Productivity would grow at an unprecedented rate, existing goods and services would become nearly free, consumer purchasing power would soar, and new demand would explode. Entrepreneurs would create a dizzying array of new industries, products, and services, and employ as many AI and human workers as possible to meet all the new demand. If AI replaced that labor again, the cycle would repeat, fueling economic growth and job growth and leading to a material utopia that Adam Smith never dared to imagine." Human needs are endless, and technological evolution is the process of continuously satisfying those needs and redefining what is possible.
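The re-spending logic of the garment-factory story can be made concrete with a toy calculation. All numbers below (prices, labor hours, the demand curve) are hypothetical assumptions for illustration only, not figures from Andreessen's essay; the point is merely to show how cost savings from automation can flow back into demand and employment elsewhere.

```python
# Toy illustration of the "fixed pie fallacy" argument: every number here
# is invented for the sake of the example, not taken from the essay.

def coats_demanded(price: float) -> int:
    """Hypothetical demand curve: cheaper coats -> more coats sold."""
    return int(12_000_000 / price)  # e.g. $100 coats -> 120,000 coats/year

# Before automation: hand production.
price_before = 100.0                    # $ per coat
labor_per_coat_before = 2.0             # worker-hours per coat
sold_before = coats_demanded(price_before)
garment_hours_before = sold_before * labor_per_coat_before

# After automation: machines cut labor per coat, competition cuts the price.
price_after = 60.0                      # $ per coat
labor_per_coat_after = 0.5              # worker-hours per coat
sold_after = coats_demanded(price_after)
garment_hours_after = sold_after * labor_per_coat_after

# The money consumers save on coats gets spent elsewhere, supporting jobs
# in other industries (assume ~$30 of other spending buys about one
# worker-hour of labor, purely illustrative).
consumer_savings = sold_after * (price_before - price_after)
other_industry_hours = consumer_savings / 30.0

# Building and maintaining the machines is itself labor.
machine_hours = 50_000

total_before = garment_hours_before
total_after = garment_hours_after + other_industry_hours + machine_hours

print(f"worker-hours before automation: {total_before:,.0f}")
print(f"worker-hours after automation:  {total_after:,.0f}")
# With these (hypothetical) numbers, total labor demanded rises even though
# each coat now needs far less labor: the pie is not fixed.
```

Under these assumed numbers, garment-making labor falls but total labor demanded across the economy rises; whether that happens in reality depends on the actual demand elasticities, which is exactly the point the fixed-pie framing ignores.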

Each new wave of productivity takes a different form, says Carlota Perez, an economist who studies technological change and financial bubbles, but it does not necessarily mean fewer jobs overall; rather, the definition of work itself changes. Conversely, if we insisted on strict logical consistency, we would have to regard not only every new technological advance as a disaster, but every past technological advance as equally terrible. If you think machines are the enemy, shouldn't you want to roll them back? Following that logic all the way down, we end up where it all started: subsistence farming. Wouldn't it be better to make your own clothes?

**03. Who Is Really Causing the Inequality?**

Besides machines taking human jobs, the social inequality caused by technology is another argument used to call for AI regulation. "Suppose AI really does take all the jobs, good and bad. That would lead to massive wealth inequality, because the owners of AI would reap all the economic rewards and ordinary people would get nothing." Andreessen's rebuttal is simple. Would Musk be richer if he only sold cars to the rich? Would he be richer still if he only built cars for himself? Of course not. He maximizes his profit by selling cars to the whole world, the largest possible market. Electricity, radio, computers, the internet, mobile phones, search engines: the makers of these technologies aggressively lowered prices until everyone could afford them. Likewise, we already have access to state-of-the-art generative AI from the new Bing, Google Bard, and others, free or at low cost. Not because their makers are stupid or generous, but precisely because they are greedy: expanding the market is how they make more money. So rather than technology driving the concentration of wealth, technology ends up empowering everyone, and it is users who capture most of the value.
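The "sell to the whole world" argument is, at bottom, arithmetic over margins and volumes. The minimal sketch below uses invented prices, costs, and market sizes (none of them from the essay) to show why a thin margin into a huge market can beat a fat margin into a tiny one.

```python
# Hypothetical comparison: luxury-only strategy vs. mass-market strategy.
# All figures are invented for illustration.

def profit(price: float, unit_cost: float, units: int) -> float:
    """Total profit = per-unit margin times units sold."""
    return (price - unit_cost) * units

# Sell only to the rich: huge margin, tiny market.
luxury = profit(price=250_000, unit_cost=150_000, units=20_000)

# Sell to everyone: thin margin, enormous market.
mass = profit(price=45_000, unit_cost=40_000, units=2_000_000)

print(f"luxury-only profit: ${luxury / 1e9:.1f}B")   # $2.0B
print(f"mass-market profit: ${mass / 1e9:.1f}B")     # $10.0B
# The mass-market strategy wins, which is why technology vendors keep
# pushing prices down until nearly everyone can afford the product.
```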

**Inequality is indeed a serious social problem, but it is not driven by technology; it stems from our refusal to let AI be used to reduce it.** The sectors of the economy that put up the greatest resistance to adopting AI are precisely housing, education, and healthcare. In a blog post this March, Why AI Won't Cause Unemployment, Andreessen used a chart to make the point. In that chart, the blue curves represent industries that allow technological innovation to improve quality while driving down prices, such as consumer electronics, cars, and home furnishings.

(The chart shows changes in prices across a dozen major sectors of the economy, adjusted for inflation.)

The red curves represent industries in which technological innovation is not allowed in to drive prices down. As you can see, the prices of education, healthcare, and housing are heading for the moon. "The industries shown in red are heavily regulated by government and by the industries themselves. They are monopolies, oligopolies, and cartels, with every impediment to change you can imagine: formal government regulation, regulatory capture, price fixing, Soviet-style pricing, occupational licensing, and so on. Technological innovation in these sectors is now practically forbidden."

We are heading into a sharply divided world: one where a flat-screen TV that covers an entire wall costs $100, while a four-year college degree costs $1 million. What happens over time? Prices keep rising for the regulated, non-technological products, and keep falling for the less regulated, technology-driven products. The former sector of the economy expands while the latter shrinks. Taken to the extreme, 99% of the economy will be the regulated, non-technological sector, and that is exactly where we are headed.
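The divergence described here is just compound growth running in opposite directions. The minimal sketch below assumes, purely for illustration, that regulated-sector prices rise a few percent a year while technology-driven prices fall a few percent a year; the rates and starting prices are assumptions, not figures taken from Andreessen's chart.

```python
# Compounding price divergence between a regulated sector and a
# technology-driven sector. Growth rates and starting prices are hypothetical.

college_price = 10_000      # regulated sector: $ per year of tuition
tv_price = 2_000            # tech-driven sector: $ per TV
college_growth = 0.07       # +7% per year (assumed)
tv_growth = -0.10           # -10% per year (assumed)

for year in range(0, 31, 10):
    c = college_price * (1 + college_growth) ** year
    t = tv_price * (1 + tv_growth) ** year
    print(f"year {year:2d}: tuition ~${c:>9,.0f}, TV ~${t:>7,.0f}")

# year  0: tuition ~$   10,000, TV ~$  2,000
# year 10: tuition ~$   19,672, TV ~$    697
# year 20: tuition ~$   38,697, TV ~$    243
# year 30: tuition ~$   76,123, TV ~$     85
# Modest annual rates, compounded for decades, produce exactly the
# "wall-sized TV for $100, $1M degree" divergence described above.
```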

**04. Speech Regulation and the Slippery Slope**

The fear that the technology we create will rise up and destroy us is deeply embedded in our culture. Whether such fear has any rational basis, and to what extent it can be distinguished from a cult, remains an open question. Andreessen classifies it as a category error: treating something of one kind as if it belonged to another kind. "AI is a machine; like your toaster, it is not going to come to life," he wrote.

Still, "if the killer robots don't get us, the hate speech and misinformation will." The ghost that haunted calls to regulate social media has followed us into the age of AI. Every country, including the United States, makes certain content on social platforms illegal, child pornography and incitement to real-world violence being obvious examples. So any technology platform that hosts or generates content is already subject to some limits. Those who support regulation go further and argue that generative AI should produce speech and ideas that benefit society and be forbidden from producing speech and ideas that harm it.

Andreessen cautions that this path has an "inevitable slippery slope." A slippery slope means that once a bad practice begins, it tends to get worse and worse; if it is not stopped, it intensifies until the consequences become unimaginable. "Once a framework exists for restricting even extremely terrible content, for example hate speech, government agencies, activist pressure groups, and non-governmental entities will swing into action and demand ever greater censorship and suppression of whatever speech they deem a threat to society and/or their own personal preferences, even in outright criminal ways," he wrote. On social media, this has been going on for ten years, and it keeps heating up.

In March of last year, the New York Times editorial board published an article, "America Has a Free Speech Problem." A New York Times/Siena College poll found that 46 percent of respondents said they felt less free to talk about politics than they did a decade ago; 30 percent said they felt about the same; only 21 percent said they felt freer, despite the dramatic expansion of voices in the public square via social media over that decade. "When the social norms around acceptable speech are constantly shifting, and when harm is not clearly defined, these restrictions on speech can become arbitrary rules with disproportionate consequences," the board wrote. Conservatives, too, have used the idea of harmful speech to serve their own ends.

For Andreessen, waking up every morning to dozens of people explaining to him in detail on Twitter why he is an idiot is actually rather useful: by debating with others you can roughly build a model of how they think, consider problems from their perspective, and make your own thinking more objective and neutral. He cautions that those arguing generative AI must be "aligned" with human values represent a very small fraction of the global population, "a characteristic of America's coastal elite, which includes many of the people who work in and write about the tech industry." "If you object to imposing a niche morality on social media and AI through ever-tightening speech codes, you should also recognize that the fight over what AI is allowed to say and generate will matter even more than the fight over social media censorship. AI is highly likely to become the control layer for everything in the world. How it is allowed to operate is probably more important than anything else. You should be aware that right now a small group of isolated social engineers, under the age-old claim of protecting you, are letting their own morality decide how AI works."

**05. The Realest, Most Terrifying Risk**

If none of the fears and worries above are real risks, then what is the biggest risk of AI? In Andreessen's view, there is one final risk that is real, and possibly the scariest and biggest of all: that the United States fails to win global AI dominance. To that end, "we should push AI into our economies and societies as quickly and as hard as possible, maximizing its benefits to economic productivity and human potential."

At the end of the essay, he offers a few simple proposals:

  1. Large AI companies should be allowed to build AI as quickly and aggressively as possible, but they should not be allowed to achieve regulatory capture (regulators becoming servants of a few commercial entities, a form of corruption), and they should not be allowed to establish a government-protected cartel, insulated from market competition, on the strength of false claims about AI risk. This will maximize the technological and social returns on the remarkable capabilities of these companies, the jewels of modern capitalism.

  2. Startup AI companies should be allowed to build AI as quickly and aggressively as possible. They should be allowed to compete. If the startups do not succeed, their presence in the market will still continuously motivate the big companies to do their best; either way, our economy and society win.

  3. Open-source AI should be allowed to proliferate freely and compete with both the big companies and the startups. There should be no regulatory barriers to open source whatsoever. Even if open source does not beat the companies, its broad availability is a boon to students everywhere who want to learn how to build and use AI and become part of the technology of the future, and it ensures that AI is available to everyone, no matter who they are or how much money they have.

  4. Governments, working in partnership with the private sector, should actively use AI to maximize society's defenses in every area of potential risk. AI can be a powerful tool for solving problems, and we should embrace it.
