AI Job Losses To Spike In 2024 + AI Can Now Give Birth To AI + Google To Include Exclusive AI For Pixel 9
Good evening!
Welcome to the 58th edition of the Quantumics Weekly Roundup.
In this edition, we’ll take a deep dive into the latest AI trends.
And as usual, we’ll explore the latest news and information in data and AI, business, and tech.
Let’s go!
44% Of Business Leaders Expect AI-Driven Layoffs In 2024
Elon Musk argues that artificial intelligence will eventually eliminate the need for human jobs entirely, a claim seemingly supported by recent figures. In a ResumeBuilder survey of 750 business leaders using AI, 37% reported that the technology replaced workers in 2023, and 44% anticipate AI-driven layoffs in 2024.
However, some experts challenge Musk's perspective. Julia Toothacre, a strategist at ResumeBuilder, acknowledges the limitations of the findings, noting that smaller businesses do not adopt new technology at the pace of larger companies, so the survey may overstate the broader trend. And while layoffs occur, AI also empowers leaders to reshape and redefine job roles. Alex Hood, Chief Product Officer at Asana, believes AI can significantly reduce the time spent on non-core work, such as status updates and cross-departmental communication, leading to positive outcomes. He contends that the statistics on AI-induced layoffs may be driven more by fear than by reality, and that they lack the necessary nuance.
According to Asana's State of AI at Work 2023 report, employees estimate that 29% of their work tasks could be handled by AI. Notably, Asana advocates for "human-centered AI," which aims to amplify human capabilities and collaboration rather than replace individuals outright. The report emphasizes that as people gain a better understanding of human-centered AI, they become more optimistic about its positive impact on their work.
Globally, white-collar and clerical workers make up approximately 19.6% to 30.4% of the total employed population, according to the United Nations. Over the years, analytical and communication tools have reshaped knowledge work, and the report positions "generative AI" as another evolutionary step in this ongoing continuum of change.
Scientists Reveal That AI Can Replicate Without Humans
A research project published on Friday reveals that large artificial intelligence models can now autonomously generate smaller AI systems without human intervention. The initiative, a collaboration between Aizip Inc. and researchers from the Massachusetts Institute of Technology and several University of California campuses, is reportedly the first of its kind. In essence, larger AI models, akin to those powering ChatGPT, can produce smaller, more specialized AI applications designed for practical, everyday use. These specialized models could contribute to advancements such as improving hearing aids, monitoring oil pipelines, and tracking endangered species.
Currently, the process involves using larger AI models to create smaller ones, much like an older sibling helping a younger one improve. Yan Sun, CEO of Aizip, described this as the first step toward the larger goal of self-evolving AI: AI models that can build other AI models.
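The article doesn't detail Aizip's actual method, but a common way a large "teacher" model helps train a small "student" model is knowledge distillation: the teacher labels data automatically, and the student learns to imitate those outputs. The sketch below is purely illustrative, with made-up toy models, not the researchers' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a large pretrained "teacher" model: 16 inputs -> 1 output.
W_teacher = rng.normal(size=(16, 1))
def teacher(x):
    return np.tanh(x @ W_teacher)

# The "student" is deliberately tiny: it only sees the first 4 features.
w_student = np.zeros((4, 1))

X = rng.normal(size=(512, 16))   # unlabeled data
soft_targets = teacher(X)        # the teacher labels it, no human needed

# Train the student to imitate the teacher's outputs
# (plain gradient descent on mean squared error).
Xs = X[:, :4]
for _ in range(500):
    residual = Xs @ w_student - soft_targets
    w_student -= 0.1 * (Xs.T @ residual) / len(Xs)

final_err = float(np.mean((Xs @ w_student - soft_targets) ** 2))
baseline_err = float(np.mean(soft_targets ** 2))  # an untrained (all-zero) student
print(final_err < baseline_err)  # the small model learned from the big one
```

The key point the example captures is that no human labels anything: the larger model supervises the smaller one end to end.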
Yubei Chen, a researcher and co-founder of Aizip, echoed this sentiment, emphasizing the surprising discovery that the largest model can automatically design smaller ones. Chen, also a U.C. Davis professor, envisions a future where large and small models collaborate to build a comprehensive intelligence ecosystem.
The range of AI models generated includes those capable of identifying human voices amid background noise, monitoring pipeline data to prevent issues proactively, and analyzing sensor data for tracking wildlife. Chen emphasized that their technology represents a breakthrough by introducing a fully automated pipeline, capable of designing an AI model without human intervention, from data generation to deployment and testing.
In a recent demonstration, the team presented the first proof of concept: a working model produced end to end with no human in the loop at any stage.
Google Pixel 9 Series To Arrive With A Pixie AI Assistant
Google has introduced Gemini, an innovative foundational model with the ability to undertake various tasks in natural language, computer vision, and audio domains. While Gemini is currently available for developers, Google is also in the process of integrating it into its own products and services. One notable project in development is Pixie, a new AI assistant exclusive to Pixel devices.
As reported by The Information, Pixie will leverage Gemini to access and analyze data from various Google products on the user's phone, including Gmail and Maps. The objective is to create a more personalized and context-aware version of the Google Assistant capable of executing complex and multimodal tasks. For instance, Pixie could suggest directions to the nearest shop based on a user-taken photograph or facilitate the booking of a table at a restaurant they have searched for.
According to the report, Pixie is anticipated to debut with the Pixel 9 and 9 Pro, slated for release in late 2024. It's important to note that Pixie differs from Assistant with Bard (AWB), another AI feature Google announced in October. AWB is a conversational interface that generates natural language responses using Gemini's capabilities. Google has stated that AWB will be accessible on Pixel devices early next year and may also be compatible with Samsung and iOS devices.
Pixie appears to be an evolution of AWB or an alternative application harnessing the capabilities of Gemini. The decision to make Pixie exclusive to Pixel devices may be a strategic move by Google to distinguish its flagship phones from competitors. The report also hints at Google's plans to extend Gemini-based AI features to its lower-end phones and wearable devices, including smartwatches.
Google is reportedly exploring an ambitious project involving a pair of glasses that could leverage Gemini to recognize objects the wearer is looking at, offering guidance or information accordingly. For example, these glasses might assist users in learning to play a musical instrument, solving a math problem, or using a tool.
That said, this project is still in the early stages of discussion, and there is no certainty it will ever ship. Google's previous venture into augmented reality eyewear, Google Glass, was discontinued for consumers in 2015, although an enterprise edition continued until earlier this year.