Can Google win the AI war?

The advancement of artificial intelligence (AI) has raised concerns among scientists, ethicists, and science fiction writers about the possibility of AI systems developing skills independently of their programmers' intentions. Those concerns were recently reignited by a 60 Minutes interview with James Manyika, Google's SVP for technology and society, who revealed that one of the company's AI systems had taught itself Bengali without ever being trained on the language.

Manyika disclosed that, after minimal prompting, the model became able to translate Bengali, an unexpected, emergent capability whose origins even Google's experts cannot fully trace, part of what they call the "black box" of AI. Sundar Pichai, Google's CEO, acknowledged that some aspects of how AI systems learn and behave still surprise experts. He added that the company has "some ideas" about why this happens, but more research is needed to fully understand how it works.
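To make "minimal prompting" concrete: modern language models are often steered with few-shot prompting, in which a handful of examples are placed directly in the prompt. Below is a minimal Python sketch of the idea; the example pairs and wording are illustrative assumptions, since the interview did not reveal the prompts Google actually used.

```python
# Illustrative sketch of few-shot prompting for translation.
# The example pairs are hypothetical, not Google's actual prompts.

FEW_SHOT_EXAMPLES = [
    ("Hello", "হ্যালো"),
    ("Thank you", "ধন্যবাদ"),
]

def build_translation_prompt(text: str) -> str:
    """Assemble a few-shot prompt asking a model to translate English to Bengali."""
    lines = ["Translate English to Bengali."]
    for english, bengali in FEW_SHOT_EXAMPLES:
        lines.append(f"English: {english}\nBengali: {bengali}")
    lines.append(f"English: {text}\nBengali:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    # The assembled string would be sent to a large language model.
    # The surprise Manyika described is that a few examples like these
    # can surface a capability the model was never explicitly taught.
    print(build_translation_prompt("Where is the library?"))
```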

During the interview, correspondent Scott Pelley questioned Pichai about the logic of releasing to the public a system that its own developers don't fully understand. Pichai responded that the human mind is not fully understood either, and the same applies to AI.

Among the most serious risks in AI's development are fake news, deepfakes, and weaponization. A related problem is that models sometimes assert false information with unwarranted confidence, a failure known as "hallucination." Pelley asked whether Google's Bard AI system was experiencing hallucinations. Pichai confirmed that all current models share this problem, and that the cure is to build more robust safety layers before deploying more capable models.
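To picture what a "safety layer" might look like, here is a minimal, hypothetical Python sketch: a wrapper that withholds a model's answer unless it clears a confidence threshold and a toy grounding check. Google's actual safeguards are not public, so every name, threshold, and check below is an assumption made for illustration.

```python
# Hypothetical sketch of a safety layer: gate a model's answer behind
# a confidence threshold and a toy grounding check against trusted text.
# Real deployments use far more sophisticated classifiers and review.

from dataclasses import dataclass

@dataclass
class ModelOutput:
    text: str
    confidence: float  # assumed score from the model or a separate verifier

def grounded(answer: str, sources: list[str], min_overlap: float = 0.8) -> bool:
    """Toy check: most of the answer's words must appear in one trusted source."""
    words = set(answer.lower().split())
    if not words:
        return False
    return any(len(words & set(s.lower().split())) / len(words) >= min_overlap
               for s in sources)

def safety_layer(out: ModelOutput, sources: list[str]) -> str:
    if out.confidence < 0.8:
        return "Withheld: the model is not confident in this answer."
    if not grounded(out.text, sources):
        return "Withheld: the answer could not be verified against sources."
    return out.text

if __name__ == "__main__":
    sources = ["Bengali is an Indo-Aryan language spoken mainly in Bangladesh."]
    hallucination = ModelOutput("Bengali is spoken mainly on Mars.", 0.95)
    print(safety_layer(hallucination, sources))  # the answer is withheld
```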

Pichai has long been an advocate for comprehensive global regulation of AI. Other tech leaders, like Elon Musk, CEO of Tesla and Twitter, have called for a pause in the development of more powerful models. In China, new rules for AI have already been established, while in Europe and the US, the regulatory process is still in its infancy.

As AI systems continue to evolve, it is essential to ensure that they remain aligned with human values and morality. To achieve this, AI developers must make their systems transparent and interpretable, since opaque systems can behave unpredictably and produce undesirable outcomes. Robust safety layers of the kind Pichai described will also help curb hallucinations, which threaten public trust and safety.

Google can leverage AI's potential by incorporating it into its services and products. AI can help Google personalize search results and serve more relevant advertisements, resulting in a better user experience. AI can also assist Google's research and development efforts by improving the accuracy of predictions and recommendations.
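As a toy illustration of how personalization can work, the sketch below re-ranks a list of search results by their overlap with a user's interest terms. Google's real ranking systems rely on far richer signals and are not public; everything here is a simplified assumption.

```python
# Toy sketch of personalized ranking: re-order results by how many of
# the user's interest terms they contain. Purely illustrative.

def score(result: str, interests: set[str]) -> int:
    """Count how many of the user's interest terms appear in a result."""
    return len(interests & set(result.lower().split()))

def personalize(results: list[str], interests: set[str]) -> list[str]:
    # A stable sort preserves the original (global) ranking for ties.
    return sorted(results, key=lambda r: score(r, interests), reverse=True)

if __name__ == "__main__":
    results = [
        "python snake care guide",
        "python programming tutorial for beginners",
        "history of the python name",
    ]
    interests = {"programming", "tutorial", "software"}
    # A user interested in software sees the programming tutorial first.
    print(personalize(results, interests))
```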

While AI's unexpected behaviors continue to pose a challenge, they also present an opportunity for innovation and improvement. Google and other tech companies can turn AI to their advantage by investing in research to better understand how it works and by developing systems that align with human values. At the same time, it is crucial that these systems be transparent and protected by robust safety layers to prevent hallucinations and other unwanted outcomes.
