Meta, the parent company of Facebook, is doubling down on an "open source" approach to artificial intelligence, making its cutting-edge technology freely available to the public to study and build new products with. The move escalates the AI race and poses new hazards.
Meta announced Tuesday that anyone will be able to use its highly sophisticated "large language model," called Llama 2, for free. It can be downloaded directly from the company or accessed through cloud providers including Microsoft, Amazon, and the AI start-up Hugging Face. Making the model open source allows businesses and researchers to view the underlying code and tweak it for their own purposes, or even incorporate it into their own products.
The release of Llama 2 could be a "watershed moment," said Matt Bornstein, a partner at the venture capital firm Andreessen Horowitz, who said the model's capabilities are comparable to recent versions of OpenAI's software.
The move could spur more competition in the expanding AI market, which is currently dominated by OpenAI, Microsoft, and Google. Llama 2 could help smaller businesses that lack the resources to pay those AI leaders for access to their algorithms. But the technology could also be used by criminals, governments, and other bad actors to develop their own potent AI capabilities; other open source AI algorithms have already been used to produce images depicting child sexual abuse.
The decision will widen the divide between proponents and opponents of making future AI technology open source. Google and OpenAI have rejected full transparency, citing the danger that criminals could abuse the technology or advance it in ways that put people at greater risk. Meta and a group of start-ups, including Hugging Face and Stability AI, argue that openness is crucial to prevent the powerful new technology from further entrenching the internet giants and stifling competition. Unlike Google and Microsoft, Meta has no cloud software division, so it cannot easily add artificial intelligence features to existing business products and charge for them.
Opening up its model gives Meta an opportunity to at least be a player, said Bhaskar Chakravorti, dean of global business at the Fletcher School at Tufts University. "Meta has kind of been in the shadow," he added. The irony, he noted, is that Google essentially followed the same strategy with its Android operating system when it was striving to catch up to Apple's iOS.
Meta has invested heavily in AI over many years, as have Google, Microsoft, and OpenAI. Its AI lab, widely regarded as a global leader in the field, is headed by Yann LeCun, an outspoken and renowned researcher considered a pioneer of the discipline. Some corporate executives have cautioned that if AI surpasses human intelligence, it could become an existential peril for humanity; LeCun and other Meta leaders have argued that those worries are exaggerated and risk prompting policymakers to tighten controls on a technology that could benefit humans.
"Open source drives innovation because it enables many more developers to build with new technology," said Mark Zuckerberg, Meta's chief executive. "It also improves safety and security, because more individuals can examine open-source software to find and address problems. We're open sourcing Llama 2 because I think a more open ecosystem will enable further advancement."
Detractors, however, say open-sourced AI models can be abused. Meta released the original Llama to a limited group of academics earlier this year, only for the model to leak online and be used for everything from sexually explicit chatbots to drug development. Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) wrote to Meta CEO Mark Zuckerberg in June, arguing that in the short time generative artificial intelligence applications have been widely accessible, they have already been abused to create problematic content, including pornographic deepfakes of real people, malware, and phishing campaigns.
The senators noted that “Meta’s decision to distribute LLaMA in such a reckless and permissive manner raises important and difficult questions about when and how it is appropriate to openly release sophisticated AI models.”
Meta said Tuesday that its latest AI model had undergone "red-teaming" exercises, in which human testers tried to coax it into making errors or producing inappropriate content, after which it was trained to avoid those kinds of responses. The company also requires users to agree not to use the model to promote terrorism, produce child sexual abuse material, or engage in discrimination.
"If I'm a regulator," said Chakravorti, the Fletcher School dean, "I'm looking at this and wondering, 'Is the genie being let out of the bottle here?'"
At a company event Tuesday, Microsoft chief executive Satya Nadella also discussed the partnership to distribute Meta's AI through Microsoft's cloud business. Nadella said a new version of the Bing chatbot would let business users ask the bot questions about data specific to their organization, making it easier to use at work.
Analysts have been waiting for Microsoft to release pricing for some of its AI tools in order to assess the technology's potential financial impact on the company. Microsoft's stock rose almost 5% following the news.
Meta is pushing to establish itself as a leader in the generative AI boom surrounding the new generation of chatbots and image generators. Zuckerberg and other executives have recently touted the company's investments in computing infrastructure and AI research, as well as new products such as an internal productivity assistant, generative AI-based advertising, and a new image creation tool.
The AI announcements follow months of weak financial performance and a litany of other difficulties for Meta. Its digital advertising business was hurt by new privacy rules from Apple, rising inflation, and a post-pandemic slowdown in e-commerce growth. Meta has laid off more than 20,000 employees over the past six months as part of a broader push to flatten the organization and improve efficiency. Despite those struggles, the company's stock has risen significantly this year amid the cost-cutting.
Meta has also been outspoken in pushing back against predictions from a growing number of influential AI figures, including Elon Musk and Google DeepMind chief Demis Hassabis, that the technology could advance so swiftly that it surpasses human intelligence within a decade.
Nick Clegg, Meta's president of global affairs, has urged regulators not to panic over apocalyptic scenarios and hastily ban AI models outright, arguing that some of the "existential threats" critics have raised are merely hypothetical and likely decades away. Instead, Clegg has argued that AI regulation should emphasize keeping the technology open and accessible.