Meta Pushes Ahead on AI Development Despite Risks
Honestly, Meta’s approach to both AI and VR development and integration is wildly conflicted, based on its own statements over time.
On one hand, it’s adding generative AI everywhere, prompting you at every turn to generate images, and to get answers to questions that you never even thought to ask via its AI tools.
Yet, at the same time, Meta’s warning of the dangers of this shift, and of how we need to be increasingly wary of AI generations that will become more and more difficult to discern from the real thing.
That’s what Instagram chief Adam Mosseri has been warning about today, noting, in a post on Threads, that:
“Whether or not you’re a bull or a bear in the technology, generative AI is clearly producing content that is difficult to discern from recordings of reality, and improving rapidly.”
Mosseri says that Meta has a role to play in this, by labeling AI generated content as best it can. But he also notes that people need to take more responsibility for assessing such in-stream.
“It’s going to be increasingly critical that the viewer, or reader, brings a discerning mind when they consume content purporting to be an account or a recording of reality. My advice is to *always* consider who it is that is speaking.”
But Mosseri, of all people, knows that people aren’t going to do that. Over and over again, we’ve seen social media hoaxes gain traction, to the point where established scientific facts, like the world being a sphere, are arguably less accepted than they were in the past.
So while it’s one thing for Mosseri to say that users will need to be more careful in assessing such, he knows that people just won’t, and that generative AI then has the potential to cause significant harm via social apps.
Yet, Meta is still pushing for more AI generated content.
Meta CEO Mark Zuckerberg recently noted that he expects content on Facebook and IG to be mostly AI generated in the near future, which is why Meta’s adding more and more AI creation tools into its apps.
Meta CTO Andrew Bosworth is also keen to push ahead, noting that the evolution of AI has shown them the way forward for the next stage, and that Meta is now looking to put its “foot on the gas” in AI development.
Yet, we don’t know the impacts of this push.
We don’t know, for example, how harmful AI generations might be, in terms of misinformation and manipulation. Meta did recently note that the anticipated wave of AI-generated content in the U.S. election didn’t happen. But that doesn’t mean that AI fakes won’t cloud our perceptions in future.
And in terms of AI companions, and conversational AI in tools like Meta’s Ray-Ban glasses, do we have any assessment on the true harms that could be caused by people eschewing human relationships, in favor of generated personal engagement?
The risks here are similar to social media itself, which we only started talking about in retrospect. Only now are governments looking to restrict access to social media for young users, due to concerns around harmful behaviors. Only now are we seeing regulators and security officials look to remove a foreign-adversary owned social app due to concerns that it could be used to sway public opinion.
These are just some of the harms that social media has potentially caused, and that potential has been enough to prompt widespread government action. Yet, it’s taken us years to get to this point in the discussion, where we’re actually assessing these as potentially harmful activities.
Social media was initially considered a novelty, a thing for kids, a harmless distraction. Till it wasn’t.
And now, AI and VR devices are being considered in much the same way.
That’s not to say that technological development is inherently bad, but again, Meta’s perspective here seems to sway significantly, from raising the alarm, to encouraging participation.
But really, what we need is proactive assessment of potential impacts before we go too far, not after. Because once you have a billion people engaging in VR, and chatting with custom AI bots, the impacts will become very clear. But by then, it’ll be too late.