Will Regulation Kill Open-Source LLMs?

Grow Your Perspective Weekly: Do we put the open-source community at risk with regulation?

Reading Time: 3min 10sec 

Will Regulating AI Kill Open-Source LLMs?

💭 Why is this important?

Don’t think two steps ahead; think five steps ahead.

As Yann LeCun and Andrew Ng have said, it’s important to distinguish between regulating the technology (such as a foundation model trained by a team of engineers) and regulating applications (such as a website that uses a foundation model to offer a chat service, or a medical device that uses a foundation model to interact with patients). We need good regulations to govern AI applications, but ill-advised proposals to regulate the underlying technology would slow down AI development unnecessarily. The EU’s AI Act thoughtfully addresses a number of AI applications, such as ones that sort job applications or predict crime, assessing their risks and mandating mitigations. Yet it also imposes onerous reporting requirements on companies that develop foundation models, including organizations that aim to release open-source code.

How can we protect open source while still regulating bad actors?

In the U.S., one faction worries that the nation’s perceived adversaries will use open-source technology for military or economic advantage, and it is willing to slow the availability of open source to deny them access. I, too, would hate to see open source used to wage unjust wars. But the price of slowing AI progress is too high. AI is a general-purpose technology, and its beneficial uses, as with other general-purpose technologies like electricity, far outstrip the nefarious ones. Slowing it down would be a loss for humanity.

What happens if the government tightens its control over open source?

Many nations and corporations are coming to realize they will be left behind if regulation stifles open source. After all, the U.S. has a significant concentration of generative AI talent and technology. If we raise the barriers to open source and slow down the dissemination of AI software, it will only become harder for other nations to catch up. Thus, while some might argue that the U.S. should slow down the dissemination of AI, that certainly would not be in the interest of most nations. 

Never place your trust in us. We’re only human. Inevitably, we will disappoint you.

— Westworld

🎬 The Actors’ New Accord, Explained

If you’ve wondered what happens when actors’ faces are used to generate movies, this one is for you:

This groundbreaking agreement ensures that actors’ consent and compensation are central when their digital likenesses are used in film production. Key provisions include:

  • Mandatory actor consent for the use of digital replicas.

  • Compensation for training AI models with an actor's performance.

  • Protection for deceased actors’ likenesses, requiring consent from their estates.

  • Regular reviews of AI's impact in the industry, fostering adaptive and responsive guidelines.

🔍 Behind the Agreement

The deal follows intense negotiations, reflecting the complex interplay between technology and traditional acting roles. It's not just a contract; it's a framework for future collaborations between humans and AI in the creative process. 

For the curious minds of the day

💸 With that said, here’s what’s new in the world of AI and automation:

  • Adept Introduces “Adept Experiments”: Innovating Workflow Automation. RPA KILLER IS HERE!
    Adept’s AI-powered workflow builder lets users automate complex or tedious tasks across various software platforms using plain-language commands. Once approved, you can use Adept’s workflow builder via a Chrome extension.

  • vimGPT: Giving GPT-4V Access to the Browser

    vimGPT lets you control your browser via GPT prompting. It gives GPT-4V the ability to interact dynamically with web content via Vimium, a keyboard-based web-navigation Chrome extension. (A minimal sketch of the idea appears after this list.)

  • Samsung unveils “Gauss” generative AI model, set to debut in Galaxy S24 series
    The model includes language, coding-assistant, and image-generation sub-models. Samsung’s move reflects a broader strategy to apply generative AI across multiple products, with a focus on delivering meaningful and personalized interactions for users. (The Korea Times)

  • Adobe’s generated images of Israel-Hamas conflict slipped into news stories
    Adobe's stock image library is under scrutiny as AI-generated images depicting the Israel-Hamas conflict are being sold and subsequently used by news publishers as authentic representations. Despite being labeled as "generated by AI" in Adobe Stock, these images are often presented without disclosure when used in news articles. (The Register)

  • Meta restricts political advertisers from using generative AI
    The decision, revealed in updates to Meta's help center, aims to prevent misuse that could amplify election misinformation. Advertisers dealing with Housing, Employment, Credit, Social Issues, Elections, and sectors like Health, Pharmaceuticals, and Financial Services are currently barred from employing generative AI features. Other tech giants like Google have also implemented similar measures. (Reuters)
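
To make the vimGPT item above concrete, here is a minimal sketch of the screenshot-to-keystrokes loop, not the project’s actual code. The model name `gpt-4-vision-preview`, the prompt, the target URL, and the presence of Vimium’s link hints in the launched browser are all assumptions for illustration.

```python
# Minimal sketch of the vimGPT idea: screenshot a page with Vimium link hints
# visible, ask GPT-4V which hint matches the goal, then type those keys.
import base64

from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_gpt4v(png_bytes: bytes, goal: str) -> str:
    """Send the screenshot to GPT-4V and get back the hint keys to press."""
    b64 = base64.b64encode(png_bytes).decode()
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed vision-capable model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Goal: {goal}. The page shows Vimium link hints. "
                         "Reply with only the hint letters to press."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        max_tokens=10,
    )
    return resp.choices[0].message.content.strip()


with sync_playwright() as p:
    # The real project runs a browser with the Vimium extension loaded; here
    # we assume it is already installed in the launched profile.
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://news.ycombinator.com")  # illustrative target
    page.keyboard.press("f")       # Vimium: show link hints
    shot = page.screenshot()       # PNG bytes of the hinted page
    keys = ask_gpt4v(shot, "open the top story")
    page.keyboard.type(keys)       # follow the hint GPT-4V chose
    browser.close()
```

In practice this loop would run repeatedly, feeding each new screenshot back to the model until the goal is reached; the project itself wires the loop to a browser with Vimium installed.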

Up next in our series: The AI-powered revolution in Synthetic Biology. Explore its transformative impact, from food production to advancing longevity.

Stay Curious. Stay Informed.
Join us every week as we delve deeper into the challenges and triumphs of automation in the modern age.

New Episode Alert!

I had the absolute pleasure of sitting down with Jason Rosoff, co-founder and CEO of Radical Candor, the workplace-culture methodology that shaped the work cultures at Google, Apple, and many other Silicon Valley companies.

Having worked with leaders from Steve Jobs and Eric Schmidt to Sergey Brin, Jason and his co-founder, Kim Scott, have helped many companies build collaborative cultures that foster innovation and trust.

 
