Elon Musk launches xAI to take on OpenAI

Image credit: Daniel Oberhaus, Creative Commons, Wikipedia.

Elon Musk, at the helm of Tesla, SpaceX, and now Twitter (or simply “X”), is looking for a new project. Apparently, he’s got enough spare time.

Rather fittingly for an entrepreneur obsessed with cutting-edge technology, Musk’s next big venture is to join the new AI gold rush – and possibly change the way we approach AI.

With generative AI emerging as a juggernaut of potential, we’re now seeing the big players assert dominance, attempting to carve out the largest piece of the pie.

The current state of play for AI

At the moment, the big player in the room is OpenAI – creator of the revolutionary chatbot ChatGPT. OpenAI is not owned by a single entity but is funded by a diverse group of investors, the most prominent of which is none other than Microsoft.

Of course, such a lucrative opportunity is attracting immense attention, and we’re seeing other large corporations enter the space, notably Alphabet (Google), Apple, Adobe, and Meta.

Now Musk has entered the building. It’s interesting to note, however, that he co-founded OpenAI back in 2015 and was one of its original backers, but he no longer has any stake in the organisation.

The potential uses and benefits of generative AI seem almost limitless. Generative AI can craft human-like responses and create output such as text, imagery, and music. It can learn, solve problems, create simulations, and produce complex digital models – all predicated on the digestion of huge swathes of training data.

The rapidly emerging concerns around ethics and the implications of runaway tech

There are a lot of questions to be answered around the runaway nature of generative AI and how we can mitigate the potential for negative outcomes. There’s also a fair bit of debate around exactly what those potential negative outcomes could be.

Vocal on the matter is Musk himself, who joined a group of industry experts in signing a 30 March 2023 open letter that called for a six-month pause on developing systems more powerful than GPT-4, citing risks to society. (Interestingly, the launch of xAI has come around well within that six-month window…)

What does ChatGPT say about its own potential for harm?

It seems only reasonable that we should ask ChatGPT what it thinks of its own potential for harm, don’t you think? Typing into a blank text field, I asked ChatGPT the following question:

‘What are the top three most likely negative outcomes for humanity through the use of ChatGPT?’

Its response?

“The use of AI systems like ChatGPT presents potential benefits but also raises concerns about negative outcomes. The three most likely issues include:

1. Misinformation Spread: AI-generated content, including from ChatGPT, could disseminate false information and disinformation on a large scale, leading to confusion and social unrest.

2. Privacy and Security Risks: AI chat systems could compromise user privacy by inadvertently sharing sensitive information, and if hacked, could expose personal data, jeopardizing security.

3. Social Isolation and Dependence: Excessive reliance on AI for interactions might diminish social skills, lead to emotional dependency, and contribute to loneliness and isolation.

These outcomes can be addressed through responsible development, regulation, and user education, fostering a balanced integration of AI that maximizes benefits while minimizing potential drawbacks. Collaborative efforts among researchers, developers, policymakers, and society are crucial to ensure AI’s positive impact while mitigating risks.”

Interesting…
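
(For the curious: if you’d rather pose the same question programmatically than type it into the web interface, a minimal sketch using OpenAI’s Python SDK might look like the following. The model name and setup here are my assumptions – the answer above came from the ChatGPT site itself.)

```python
# A sketch (not from the article) of posing the same question via
# OpenAI's Python SDK rather than the ChatGPT web interface.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "What are the top three most likely negative outcomes "
    "for humanity through the use of ChatGPT?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; ChatGPT's free tier at the time
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
```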

What’s the stated Elon difference then?

So, what’s Elon going to do differently? Without concrete examples of how his offering will be unique, it’s a tricky question – but statements about aims and concerns did come straight from the horse’s mouth (aka the world’s richest man) in a Twitter Spaces discussion. From this commentary, it seems that Elon is on board with setting limits and creating technology that will be “pro-humanity”.

“If I could press pause on AI or really advanced AI digital superintelligence I would,” he said. “It doesn’t seem like that is realistic so xAI is essentially going to build an AI … in a good way, sort of hopefully.”

This in itself poses a problem, though. Who decides what’s moral or pro-humanity? Musk went on to say: “If you programme a certain reality [into an AI] you have to say what morality you are programming. Whose decision is that?”

Instead, he intends to build an AI model that is “maximally curious” about humanity, rather than one with programmed moral guidelines.

“From an AI safety standpoint … a maximally curious AI, one that is trying to understand the universe, is I think going to be pro-humanity,” he mused.

I suppose time will tell. This is certainly a space to be intensely monitored over the coming years.
