OpenAI Co-Founder John Schulman Departs for Rival Anthropic

Schulman’s Departure Marks a New Chapter in AI Alignment

John Schulman, a co-founder of the artificial intelligence giant OpenAI, announced his departure from the company in a post on the social media platform X late Monday. Schulman, a key figure behind the development of ChatGPT, has joined rival AI firm Anthropic. In his announcement, Schulman explained that his decision was motivated by a desire to deepen his focus on AI alignment and to return to hands-on technical work.

“This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work,” Schulman shared in his post.

Leadership Changes at OpenAI

Schulman’s exit comes amid a series of significant leadership changes at OpenAI. Greg Brockman, the company’s President and another co-founder, also announced that he is taking a sabbatical through the end of the year, as mentioned in his own X post on Monday.

Adding to the shakeup, Peter Deng, a product manager who joined OpenAI last year, has also left the company, as first reported by The Information. These departures follow other notable exits, including AI safety leader Aleksander Madry, who was reassigned to a different role in July, and chief scientist Ilya Sutskever, who left the company in May. Another founding member, Andrej Karpathy, departed in February and has since launched an AI-integrated education platform.

Elon Musk Revives Legal Battle Against OpenAI

The news of Schulman’s departure and the broader leadership changes at OpenAI come at a time when the company is also facing external challenges. Tesla CEO Elon Musk, who co-founded OpenAI but left the company three years later, has revived a lawsuit against the firm and its CEO, Sam Altman. Musk alleges that OpenAI has prioritized profits and commercial interests over the public good.

As OpenAI navigates these internal and external challenges, Schulman’s move to Anthropic signals a shifting landscape in the competitive AI industry.

ByteDance Challenges OpenAI’s Sora with New AI Video App Launch

ByteDance Enters Competitive AI Video Market

BEIJING, Aug 6 (Reuters) – ByteDance, the parent company of TikTok, has launched a new software tool that can generate videos from text prompts, marking its entry into a rapidly growing market. This move places ByteDance among a group of Chinese tech companies racing to develop AI-driven video creation tools, a market also targeted by OpenAI, the creator of ChatGPT.

OpenAI, backed by Microsoft, introduced its text-to-video model Sora in February, though the tool remains inaccessible to the public. Since then, Chinese firms have quickly developed and released their own versions of similar tools.

Jimeng AI: ByteDance’s Latest AI Innovation

ByteDance’s new text-to-video app, Jimeng AI, was developed by its subsidiary, Faceu Technology. The app is now available on the Apple App Store for users in China, following its earlier release on Android on July 31. This launch is part of a broader trend among Chinese tech companies to release similar models, with ByteDance joining the likes of Kuaishou and Zhipu AI.

Kuaishou, a leading Chinese video app, recently made its Kling AI text-to-video model available to a global audience. Meanwhile, Zhipu AI introduced its Ying video-generating model last month, followed closely by the launch of the Vidu app by another startup, Shengshu.

Faceu Technology operates under ByteDance’s Jianying business unit, known for the popular video editing app CapCut. Jimeng AI offers subscription plans starting at 69 yuan ($9.65) per month, with options for a single month or an annual subscription, allowing users to generate up to 168 AI videos per month.

Growing Competition in AI Video Creation

As Chinese tech companies continue to develop and release AI-powered video creation tools, the competition in this emerging market is intensifying. ByteDance’s entry with Jimeng AI signals its commitment to expanding its AI capabilities and keeping pace with its rivals.


Copyright Group Shuts Down Dutch Language AI Dataset

Introduction: Action Against Unlawful Data Use

On Tuesday, Dutch copyright enforcement group BREIN announced the removal of a significant language dataset that was being offered for AI model training. The dataset contained unauthorized information gathered from various sources, raising concerns over copyright violations.

Unauthorized Content in the Dataset

BREIN revealed that the dataset included material collected without consent from tens of thousands of books, news websites, and Dutch language subtitles from numerous films and TV series. This extensive collection of copyrighted content led to the group’s swift action to prevent further misuse.

Challenges in Tracking Dataset Usage

Bastiaan van Ramshorst, Director of BREIN, expressed uncertainty about the extent to which AI companies may have already utilized the dataset. “It’s very difficult to know, but we are trying to be on time,” he told Reuters, emphasizing the importance of proactive measures to avoid future legal challenges.

Implications of the EU AI Act

Van Ramshorst also highlighted the forthcoming European Union AI Act, which will require AI companies to disclose the datasets used in training their models. This regulation aims to enhance transparency and accountability in the AI industry.

Global Context: Similar Cases in the U.S. and Denmark

The issue of copyright infringement in AI training is not limited to the Netherlands. In the United States, OpenAI, backed by Microsoft, is facing several lawsuits, including one from The New York Times, for allegedly using copyrighted material without permission. Similarly, in Denmark, the Danish Rights Alliance successfully forced the removal of a large dataset known as “Books3” last year.

Resolution: Cease and Desist Compliance

The individual responsible for offering the Dutch dataset agreed to comply with a cease and desist order and promptly removed the content from the website where it was available for download. BREIN did not disclose the individual’s identity, citing Dutch privacy laws.

Conclusion: A Warning to the AI Industry

BREIN’s actions serve as a reminder to the AI industry about the importance of respecting copyright laws. As regulations tighten and enforcement actions increase, AI companies must ensure that the datasets they use for training are legally obtained and properly documented.


Google Expands AI-Powered Search Features to New Regions

Google’s parent company, Alphabet, announced on Thursday that it is expanding its AI-generated summaries for search queries to six additional countries. This move comes just two months after the company scaled back some of its AI capabilities due to issues that arose during the initial launch.

AI Overviews: A Brief History

In May, Google made its AI Overviews feature available to all users in the United States. This feature, which displays AI-generated summaries at the top of search results pages, was the result of a year-long trial of a more limited version. However, the feature drew significant criticism after screenshots of inaccurate answers spread across the internet, including a pizza recipe that listed glue as an ingredient and a false claim that former U.S. President Barack Obama is Muslim.

Google’s Response to Initial Criticism

Google quickly acknowledged the errors, referring to them as “odd and erroneous overviews.” In a late May blog post, the company outlined updates to the feature, including stricter guidelines on which queries would trigger AI-generated answers. Additionally, user-generated content from sites like Reddit was no longer being used as source material for these answers.

Positive User Feedback Despite Initial Hiccups

Despite the rocky start, Hema Budaraju, Google’s Senior Director of Product, emphasized in a recent interview that the quality of the AI Overviews is improving. According to internal data, users with access to the feature reported higher satisfaction levels and engaged in longer, more specific searches compared to users without the feature.

Global Expansion: New Countries and Languages

The AI Overviews feature will now be available in Brazil, India, Indonesia, Japan, Mexico, and the United Kingdom. It will be accessible in local languages, including Portuguese and Hindi.

Enhancements to the AI Overviews Feature

Google is also introducing more hyperlinks within the AI Overviews. Websites will now be displayed on the right side of the AI-generated answer, and the company is testing an update that will include links directly within the text of the overview. These changes are part of Google’s efforts to “prioritize approaches that drive traffic to relevant websites,” as noted in a blog post on Thursday.

Concerns from the Media Industry

These updates come amid ongoing concerns from the media industry about the potential loss of referral traffic due to AI-generated search features. However, Budaraju expressed confidence that the new update would benefit Google, consumers, and publishers alike.

Broader Context: Legal and Competitive Challenges

This announcement follows a ruling by a U.S. judge last week, which determined that Google holds an illegal monopoly on search. This ruling could lead to a trial that may result in the breakup of Alphabet. At the same time, Google faces increasing competition from AI advances by rivals, such as OpenAI, which is backed by Microsoft.

Conclusion

As Google continues to refine and expand its AI-generated search features, it must navigate both internal challenges and external pressures from legal and competitive forces. The company’s latest updates aim to enhance user experience while addressing the concerns of the media industry, setting the stage for the next chapter in AI-powered search.

AI’s Energy Demands Are Higher Than Anticipated

AI’s Growing Energy Appetite: A Looming Challenge

As generative AI tools like OpenAI’s ChatGPT become increasingly prevalent, their energy consumption is raising significant concerns. With billions of parameters and vast data requirements, these models depend heavily on massive data centers, which consume considerable electricity for both processing and cooling. Recent forecasts suggest that the expanding demand for advanced AI models could stretch energy resources further than previously anticipated.

Soaring Energy Demands for Data Centers

The Electric Power Research Institute (EPRI) has recently highlighted that data centers powering AI models could account for up to 9.1% of the US’s total energy demand by 2030. This marks a notable increase from the current 4%. Globally, the International Energy Agency (IEA) predicts that data center energy needs could double by 2026.

The report underscores that this surge in energy demand is largely driven by power-intensive generative AI models. For example, a single query to OpenAI’s ChatGPT consumes approximately ten times more electricity than a typical Google search. The demands are even greater for models that generate audio and video, which require far more data and computation than their text-based predecessors. According to Goldman Sachs, AI alone could account for 19% of data centers’ power needs by 2028.
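To make the tenfold gap concrete, here is a back-of-the-envelope sketch. The per-query figures and the daily query volume are illustrative assumptions chosen only to match the roughly ten-to-one ratio cited above, not measured values:

```python
# Back-of-the-envelope comparison of conventional search vs. generative-AI
# query energy. Both per-query figures below are illustrative assumptions,
# picked to reflect the ~10x ratio cited in the text.
GOOGLE_SEARCH_WH = 0.3   # assumed Wh per conventional search query
CHATGPT_QUERY_WH = 3.0   # assumed Wh per ChatGPT query (~10x a search)


def daily_energy_kwh(queries_per_day: float, wh_per_query: float) -> float:
    """Total daily energy in kWh for a given query volume."""
    return queries_per_day * wh_per_query / 1000.0


# Hypothetical volume of 100 million queries per day:
queries = 100_000_000
search_kwh = daily_energy_kwh(queries, GOOGLE_SEARCH_WH)   # 30,000 kWh/day
chat_kwh = daily_energy_kwh(queries, CHATGPT_QUERY_WH)     # 300,000 kWh/day
print(f"Search:  {search_kwh:,.0f} kWh/day")
print(f"ChatGPT: {chat_kwh:,.0f} kWh/day ({chat_kwh / search_kwh:.0f}x)")
```

Under these assumed figures, the same query volume routed through a generative model draws an order of magnitude more electricity, which is why the mix of query types matters so much to data-center forecasts.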

Fossil Fuels and Data Centers: A Short-Term Solution

The rising energy demands of data centers pose a risk to global energy grids. Currently, data centers represent 1-2% of global power consumption, but this figure is projected to increase to 3-4% by 2030. In the US, home to about half of the world’s data centers, these facilities are expected to consume 8% of the nation’s energy by the end of the decade. The Goldman Sachs forecast indicates that roughly 60% of the energy required to meet this growing demand will likely come from nonrenewable sources, casting doubt on the feasibility of relying solely on renewables.

This development complicates earlier assurances from tech leaders like OpenAI’s Sam Altman, who had suggested that advanced AI could potentially reduce greenhouse gas emissions in the future. Altman, along with other Silicon Valley investors, has put $20 million into Exowatt, a startup aiming to use solar energy for powering AI data centers.

Towards Sustainable Solutions

In the face of these challenges, immediate solutions are crucial. The EPRI report advocates for increased efficiency within data centers, particularly by minimizing the energy spent on cooling and lighting. Cooling alone accounts for about 40% of a data center’s energy use. The report also suggests that incorporating backup generators powered by renewable sources could enhance the reliability and sustainability of energy grids.

“Transforming the data center-grid relationship from a ‘passive load’ model to a ‘shared energy economy’ could not only address the rapid growth of AI but also improve affordability and reliability for all electricity users,” the EPRI report notes.

As AI technology continues to evolve, addressing these energy challenges will be essential for balancing technological advancement with environmental sustainability.

How Would You Utilize a Robotic Third Thumb?

Reimagining Creativity and Productivity

Imagine the legendary guitarist Jimi Hendrix pushing the boundaries of sound with an additional thumb, or historic painters like Frida Kahlo and Vincent van Gogh completing their masterpieces with greater ease. Such scenarios may soon become reality with the advent of a new 3D-printed robotic wearable called “The Third Thumb.” Designed to augment human capabilities, this device represents a significant step forward in wearable motor augmentation technology, aiming to enhance accessibility and functionality.

How the Third Thumb Works

Developed by Dani Clode from the University of Cambridge, The Third Thumb is a cutting-edge, 3D-printed robotic appendage controlled by the user’s toes. Here’s how it functions:

  • Design and Operation: The device is strapped to the wrist and sits on the opposite side of the palm from the user’s natural thumb, resembling an extended finger. It is operated via two sensors placed under the big toes: the right toe controls horizontal movement and the left toe controls vertical movement. The device’s wireless, proportional controls translate toe pressure into thumb movements, allowing for precise manipulation of objects.
  • Potential Applications: Beyond aiding those who have lost limbs, The Third Thumb could significantly enhance various biological functions, potentially making complex tasks easier and more efficient. Researchers envision it improving productivity and safety across diverse fields.
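The proportional control scheme described above, in which each big toe's pressure drives one axis of thumb movement, can be sketched roughly as follows. All names, ranges, and scaling factors here are hypothetical illustrations, not the device's actual firmware:

```python
# Hypothetical sketch of the proportional toe-to-thumb control mapping
# described in the article: two pressure sensors (one under each big toe)
# each drive one movement axis. The sensor range, angle limits, and
# function names are illustrative assumptions only.

def pressure_to_angle(pressure: float, max_angle_deg: float = 90.0) -> float:
    """Proportionally map a normalized toe-pressure reading (0.0-1.0)
    onto a joint angle, clamping out-of-range input."""
    clamped = min(max(pressure, 0.0), 1.0)
    return clamped * max_angle_deg


def thumb_pose(right_toe: float, left_toe: float) -> dict:
    """Right toe controls horizontal movement; left toe controls vertical
    movement, mirroring the division of control described above."""
    return {
        "horizontal_deg": pressure_to_angle(right_toe),
        "vertical_deg": pressure_to_angle(left_toe),
    }


# Example: half pressure on the right toe, full pressure on the left.
pose = thumb_pose(0.5, 1.0)
print(pose)  # {'horizontal_deg': 45.0, 'vertical_deg': 90.0}
```

The key design point captured here is proportionality: harder toe pressure produces a larger movement rather than a simple on/off actuation, which is what allows the precise object manipulation the researchers describe.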

Broad Testing and Impressive Results

The Third Thumb has undergone extensive testing, with researchers presenting it at the 2022 Royal Society Summer Science Exhibition. Over five days, 596 participants, ranging from ages 3 to 96, tested the device. Key findings include:

  • Ease of Use: An impressive 98% of participants were able to don the device and manipulate objects within one minute of use. The tests included grasping pegs from a pegboard and handling various foam objects, with over half of the participants successfully completing both tasks.
  • Inclusivity: The results showed no significant differences in performance based on age, gender, or handedness, highlighting the device’s broad applicability and effectiveness across diverse user demographics.

Ethical Considerations and Future Prospects

The researchers emphasize the importance of inclusivity in the design of wearable technology. As Professor Tamar Makin notes, ensuring these devices are accessible to all, particularly marginalized communities, is essential for equitable technological advancement.

  • Design Philosophy: Dani Clode underscores that The Third Thumb’s design aims to be as inclusive as possible, addressing potential disparities in technology use and ensuring that advancements benefit a wide range of users.
  • Real-World Applications: Initial demonstrations of The Third Thumb reveal its potential for practical tasks—such as squeezing fruit, pinching thread, and even playing guitar—showcasing its versatility and utility.

Conclusion

The Third Thumb represents a groundbreaking development in wearable technology, offering new opportunities for enhancing human capability. While learning to use the device may initially seem unusual, the recent research indicates that it is both intuitive and effective. As technology continues to evolve, The Third Thumb could play a significant role in expanding the boundaries of what is possible for creators and everyday users alike.

Meta AI Will Continue Using Your Content Despite Outrage on Instagram

The Revelation and Backlash

Last month, Meta revealed a surprising shift in its use of Instagram content. The company admitted that images uploaded by users, including original artworks, are now utilized to train its AI image generator. This disclosure, made public by Meta executive Chris Cox during a Bloomberg interview, has ignited significant backlash from creators. Over 130,000 Instagram users have shared a message on the platform protesting against Meta’s use of their data for AI training. However, these objections reflect a misunderstanding of the terms users agreed to when joining the platform.

The Reality of Copyright and User Consent

Creator’s Discontent

The protest began with a viral Instagram template allowing users to quickly share a message stating: “I own the copyright to all images and posts submitted to my Instagram profile and therefore do not consent to Meta or other companies using them to train generative AI platforms. This includes all future AND past posts. @Instagram get rid of the Ai program.”

Understanding User Rights

While the sentiment is clear, it overlooks a crucial detail: Instagram’s terms of service grant Meta extensive rights to user content. Although Instagram doesn’t claim outright ownership, users provide Meta with a broad license to use, modify, and create derivative works from their content. This license explicitly includes the use of content for training AI models.

Peter K. Yu, a Texas A&M Regents Professor of Law and Communication, explained that the license users grant is non-exclusive, royalty-free, transferable, sub-licensable, and worldwide. This means that even though users retain copyright, Meta has significant freedom to use the content in various ways, including for AI training.

How Meta Uses Public Data

Training AI with Public Content

Meta’s AI training process involves a vast array of data, including public posts from Instagram and Facebook. Chris Cox clarified that while Meta does not use private data, public posts, comments, and captions contribute to AI model development. This practice is consistent with Meta’s privacy policy and terms of service, which were updated last month to reflect their approach to AI training.

Comparison with Competitors

Meta’s extensive user base provides it with a rich source of data, giving it a competitive edge over other AI developers. Unlike competitors, Meta’s access to millions of users’ publicly shared content allows it to refine its AI tools more effectively. This data usage mirrors past practices where publicly available posts significantly contributed to AI advancements.

The Creator’s Dilemma

Legal and Practical Implications

Artists and creators have expressed frustration, with some threatening to leave Instagram if their concerns aren’t addressed. Despite their protests, the legal framework does not offer much recourse for content shared on social media platforms. Users’ options to opt out of AI training are limited, and private account settings do not retroactively affect previously public posts.

Current Options and Limitations

Meta offers tools to control data usage, such as requesting the removal of third-party data or objecting to its use in AI training. However, these measures do not apply to first-party data shared directly on Meta platforms. Users can make their accounts private to limit data access, but this does not affect data already collected from public posts.

The Bigger Picture of Consent Online

The Complexity of Digital Consent

The confusion surrounding consent highlights a broader issue with modern internet practices. Helen Nissenbaum, a technology philosopher, notes that dense terms of service and opaque data privacy practices leave users uncertain about what they are consenting to. A 2017 Deloitte survey found that 91% of US consumers agree to terms of service without fully reading them, underscoring a critical gap between user expectations and actual data practices.

The Evolving Landscape

As AI technology and data practices evolve, understanding and managing consent becomes increasingly complex. While users’ objections to Meta’s data usage reflect a desire for greater control, the current system often leaves them with limited options to protect their digital content.

OpenAI Disbands Team Focused on Preventing Rogue AI

OpenAI has recently disbanded its Superalignment Team, which was created to address potential existential risks associated with artificial intelligence. The decision, confirmed today by Wired and other sources, comes less than a year after the team’s establishment. Jan Leike, a former co-lead of the team, revealed the dissolution in a detailed thread on X, following his cryptic resignation announcement on May 15.

A Brief History of the Superalignment Team

The Superalignment Team was launched in July 2023, with the goal of managing the risks posed by superintelligent AI. OpenAI initially described this initiative as essential, noting that while superintelligent AI could potentially solve major global challenges, it also posed serious risks including the potential for human extinction. The team, led by Leike and OpenAI co-founder and chief scientist Ilya Sutskever, was tasked with developing strategies for AI governance and alignment.

Leadership Departures and Internal Disputes

Leike’s resignation and the subsequent disbandment of the team highlight ongoing internal disagreements at OpenAI. Leike cited fundamental disagreements with OpenAI’s leadership regarding the company’s core priorities as a key factor in the team’s dissolution. Sutskever, who also co-led the Superalignment Team, has since left the company, reportedly over similar concerns. The remaining team members have been reassigned to other research groups.

Contradictions in OpenAI’s Approach

Despite the emphasis on AI risks, OpenAI, along with competitors like Google and Meta, continues to showcase advancements in AI technology. Recent releases include GPT-4o, a multimodal generative AI system capable of generating lifelike responses. This emphasis on cutting-edge developments contrasts with the company’s warnings about the dangers of “rogue AI.” Critics argue that while AI companies push forward with new technologies, they may be neglecting serious safety concerns.

The Broader Implications and Industry Reactions

The exact reasons behind the shutdown of the Superalignment Team remain unclear, but recent internal power struggles suggest significant differences in opinion on how to advance AI technology safely. Critics of the AI industry point out that the technology, while not yet self-aware, is already impacting issues such as misinformation, content ownership, and labor rights. As AI systems become more integrated into various sectors, society faces growing challenges in managing their consequences.

In summary, the disbandment of OpenAI’s Superalignment Team underscores the complex balance between technological innovation and safety. As the AI industry evolves, it will be crucial for companies and regulators to address these challenges while ensuring that advancements do not outpace the measures needed to mitigate potential risks.

AI Contenders Enter the Political Arena

In the UK, an unconventional political contender named “Steve” is stirring up the political landscape. Advocating for a four-day work week and incentives for switching to electric cars, Steve isn’t your typical candidate. In fact, Steve is not human at all. This AI-powered chatbot offers Brighton Pavilion voters the chance to engage via online voice chat. Behind Steve, and other AI candidates like him, are creators who believe that advanced language models from companies like OpenAI and Google might better represent voter views than traditional human politicians.

The Legal and Practical Challenges

Running an AI for political office presents significant hurdles. The legality of an AI candidacy is uncertain, and the feasibility of a software program handling everyday political tasks is questionable. Even if these AI candidates manage to overcome these challenges, they must prove they can avoid common AI issues such as fabricating facts and perpetuating biases. Currently, it seems more likely that these AI candidates will be remembered as gimmicks rather than serious contenders.

AI Steve: A New Breed of Politician?

AI Steve, vying for a UK parliamentary seat as an Independent, is based on British businessman Steve Endacott. Created by Neural Voice, a company led by Endacott, AI Steve engages voters in conversations about policies and allows them to suggest new ones. Endacott envisions AI Steve as a tool to enhance representative democracy by holding up to 10,000 simultaneous conversations and using these interactions to shape policy. However, AI Steve’s inability to physically vote or attend events means Endacott will take on these roles himself, raising legal and ethical questions.

Virtual Integrated Citizen: The Proxy Candidate

In Cheyenne, Wyoming, voters might soon encounter the “Virtual Integrated Citizen” (VIC) in the mayoral race. Developed by local librarian Victor Miller, VIC operates on OpenAI’s GPT-4 and claims an “IQ” of 155. Due to local laws prohibiting nonhumans from holding office, VIC cannot be directly elected. Instead, Miller will appear on the ballot and intends to let VIC handle all decision-making if elected, effectively serving as a human proxy for the AI.

Yas Gaspadar: The Symbolic Protest Candidate

In Belarus, an AI chatbot named Yas Gaspadar is running for parliament. Created by Sviatlana Tsikhanouskaya, leader of the anti-authoritarian opposition, Yas Gaspadar advocates for democracy, education investment, and free elections. This AI candidate, designed as a protest symbol, cannot be arrested, making it a potent symbol against the current regime.

The Promise and Peril of AI Politicians

AI candidates theoretically offer the ability to analyze extensive documents and generate informed policy recommendations. If AI can address current issues like misinformation and bias, it might provide a more accurate representation of voter interests. However, AI models are still prone to errors and “hallucinations,” where they generate incorrect information as facts. Furthermore, AI candidates might struggle with the human aspects of politics, such as negotiation and personal interaction.

Voter reception to AI candidates remains mixed. Recent examples, like a chatbot mishap in New York, highlight public skepticism. Polls indicate significant concern about AI’s role in spreading misinformation and its overall impact on daily life. As AI continues to evolve, its place in politics will be closely watched, but for now, these candidates may be more about spectacle than substantive change.

Explore the New AI Features Enhancing Siri on Your iPhone

The Next Wave of AI Enhancements for Siri on Your iPhone

As artificial intelligence continues to evolve, tech giants like Google, Microsoft, and Samsung are introducing groundbreaking tools. Apple is stepping into the spotlight with its own suite of AI features, branded as Apple Intelligence. This new suite aims to enhance Siri, Apple’s long-standing digital assistant, making it more intuitive and powerful.

With the launch of iOS 18 in September, Apple will begin rolling out these updates, though some features may take up to a year to fully debut. Here’s a closer look at what Siri will offer:


Enhanced App Control

Siri is about to gain deeper integration with your iPhone’s operating system. With iOS 18, you’ll be able to:

  • Rename Documents: Easily rename files in Pages through Siri commands.
  • Manage Tabs: Close Safari tabs or switch between open ones with a simple voice command.
  • Edit Photos: Apply enhancements to images in your Photos app.
  • Switch Cameras: Toggle between front and rear cameras seamlessly.

Additionally, Siri will understand the current content displayed on your screen, enabling you to execute tasks directly related to what you’re viewing. For instance, if you’re looking at an address, you can instruct Siri to “add this address to my contacts,” and it will know precisely what you mean.


Improved Natural Language Understanding

Siri’s ability to comprehend and respond to natural language is set to improve significantly. Expect:

  • Contextual Awareness: Siri will remember the context of your previous interactions, allowing for more fluid conversations. You can ask about the weather and then set a reminder for a related trip without repeating details.
  • Personalized Responses: Siri will understand and remember personal references, such as recognizing “mom” in a flight inquiry and pulling information from your communications.

This improved context-awareness will also help Siri manage commands like “play the podcast my wife sent me yesterday” or “show me the files James shared last week,” streamlining your access to information.


Typing and Enhanced Features

In addition to voice commands, Siri will offer more options for text-based interaction:

  • Type-to-Siri: This feature, accessible via a double tap on the navigation handle, will now be more convenient, allowing you to type commands just as you would speak them.
  • Integration with Apple Intelligence: Siri will be linked to other new AI features, such as generating AI images, creating custom emojis, and adjusting text tones. These functionalities will be controllable via Siri as well as through touch interactions.

Future Enhancements

The upcoming update includes more than just these core features:

  • Voice-Controlled Smart Devices: Future updates may enable Siri to control compatible smart home devices, including robot vacuum cleaners.

These improvements mark the most significant upgrade to Siri yet, potentially transforming how you interact with your iPhone. With Apple Intelligence leading the charge, managing tasks and accessing information will become more intuitive and efficient.