Next-Gen Artificial Intelligence Will Be Able to ‘See’ Better than Humans

In a bid to push the boundaries of computer technology ever further, a team of scientists from the University of Texas has taught an AI how to ‘see’ and interact with its surrounding environment just like human beings do.

Though still far from the super-intelligent robots bent on world domination, this next-gen AI may prove crucial in search-and-rescue operations or other situations that call for speed and precision.

Are we closer to a Skynet-type scenario?

Project leader Professor Kristen Grauman said that the preliminary results indicate that deep-learning AIs can be taught to interact with their surroundings and react based on visual cues.

As for the uprising of intelligent killing machines, the Professor and her team stated that the technology is still in its infancy, making this scenario highly unlikely.

According to Grauman, the AI developed in Austin’s computer labs employs a deep-learning algorithm loosely modeled on the neural pathways of the human brain.
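To give a feel for what that layered, brain-inspired design looks like, here’s a minimal sketch in PyTorch. The layer sizes and structure are our own illustrative assumptions, not details the team has published:

```python
# A minimal sketch of how a deep-learning "vision" network stacks layers the
# way the brain chains neural pathways. Every size here is an illustrative
# guess, not a published detail of the Austin team's model.
import torch
import torch.nn as nn

class TinyVisionNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers play the role of early visual pathways,
        # picking out edges and textures from raw pixels.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)            # works at any input resolution
        self.classifier = nn.Linear(32, num_classes)   # combines cues into a decision

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)

# One RGB frame in, one vector of class scores out.
scores = TinyVisionNet()(torch.randn(1, 3, 64, 64))
```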

Although the AI features the latest advancements in computer tech, teaching it new ‘tricks’ is a challenge. It’s one thing to see the environment, but something else entirely to interpret and integrate visual cues into a context that a non-sentient entity can understand.

The team stated that teaching the AI to ‘see’ is a trial-and-error process, one that requires many hours and hundreds of thousands of 360-degree pictures and video clips.
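Here’s a toy version of what such a trial-and-error loop might look like. The random tensors stand in for the 360-degree footage, and the labels and tiny model are placeholders invented for the example, not the team’s actual data or architecture:

```python
# A toy trial-and-error training loop over stand-in 360-degree footage.
# All data, labels, and the model are illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

frames = torch.randn(1000, 3, 128, 256)        # fake equirectangular panoramas
labels = torch.randint(0, 10, (1000,))         # fake "what's in the scene" labels
loader = DataLoader(TensorDataset(frames, labels), batch_size=32, shuffle=True)

model = nn.Sequential(                         # deliberately tiny stand-in network
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                         # real training runs far longer
    for batch, targets in loader:
        loss = loss_fn(model(batch), targets)  # the "error" in trial and error
        optimizer.zero_grad()
        loss.backward()                        # push the error back through layers
        optimizer.step()                       # nudge the weights to do better
```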

As for the results, Grauman didn’t go into much detail but mentioned that the AI responded fairly well to perception and navigation tests.

So, where does this all lead? The Professor and her team want to piece together an AI smart enough to tackle any kind of environmental challenge.

This means giving it the ability to develop its own patterns of behavior, enabling the machine to interact with the visual world rather than just analyze it.
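As a rough illustration of the difference, here’s a toy ‘policy’ that looks at the current camera view and decides where to look next. The action list, network, and sizes are all assumptions made for this example; the team hasn’t published its method at this level of detail:

```python
# An invented illustration of "interacting, not just analyzing": a tiny
# policy network scores possible next glances given the current view.
import torch
import torch.nn as nn

ACTIONS = ["look_left", "look_right", "look_up", "look_down"]

policy = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, len(ACTIONS)),                # one score per possible glance
)

view = torch.randn(1, 3, 64, 64)               # the agent's current camera view
next_move = ACTIONS[policy(view).argmax(dim=1).item()]
print(next_move)                               # e.g. "look_left"
```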

What are the implications of this project?

For years, AIs have been successfully used in a variety of tasks, from analyzing street traffic to rooting out hackers and identifying vulnerable spots in software. However, this type of system could have much more tangible applications.

For instance, in search-and-rescue operations, a machine capable of rapidly mapping and analyzing the environment could help rescuers reach victims faster.

Still, for this to work, the Austin-born AI would need a mechanized body, one capable of moving and making decisions on the go, without input from a human operator.

Probably the best analogy, in this case, would be an AI-powered version of a bomb-disposal robot: such robots are versatile, can cross almost any type of terrain, and move with great precision.

Currently, the team is working on an improved version of the machine. This means feeding it more scenarios, pictures, and videos, and stress-testing it.

Wrap-up

Should we fear that the machines will take over and eliminate all life on Earth? Certainly not. We’re still in the Discovery Age of computer tech, which means that we still have a very long way to go before we can achieve the Asimovian positronic brain.

What do you think about Austin’s ‘seeing’ AI? Head to the comments section and let us know.

