DeepMind’s AlphaStar AI can now beat almost any human player of StarCraft II, one of my favorite video games of all time, according to the MIT Technology Review.
Its programmers figured out that it wasn’t enough to let AlphaStar play zillions of simulated games in its silicon brain, using them to teach itself how to win through a process called reinforcement learning. So they also equipped it to provoke mistakes and expose flaws in its competitors’ play, so it could learn how to exploit their weaknesses.
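For the curious, here’s a toy sketch of that idea, my own illustration rather than anything from DeepMind: an “exploiter” policy is trained purely to beat the current main policy in a simple rock-paper-scissors game, and the main policy then learns from losing to it. The names, the game, and the update rule are all assumptions made for illustration.

```python
# Illustrative sketch only -- not DeepMind's code.
# One policy (the exploiter) trains solely to beat the current main policy;
# the main policy then trains against its exploiter to patch the weakness.
# Toy game: rock-paper-scissors, with policies as per-move probabilities.

import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def sample(policy):
    """Draw a move from a policy given as per-move probabilities."""
    return random.choices(MOVES, weights=[policy[m] for m in MOVES])[0]

def normalize(policy):
    total = sum(policy.values())
    return {m: p / total for m, p in policy.items()}

def play(a, b):
    """Return +1 if move a beats b, -1 if it loses, 0 for a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def train_exploiter(main_policy, rounds=5000, lr=0.05):
    """Train an exploiter whose only job is to beat the current main policy."""
    weights = {m: 1.0 for m in MOVES}
    for _ in range(rounds):
        my_move = sample(normalize(weights))
        result = play(my_move, sample(main_policy))
        # Reinforce moves that beat the main policy, dampen moves that lose.
        weights[my_move] = max(0.01, weights[my_move] + lr * result)
    return normalize(weights)

def train_main_against(main_policy, opponent, rounds=5000, lr=0.05):
    """Let the main policy learn from games against its exploiter."""
    weights = {m: max(0.01, main_policy[m] * 10) for m in MOVES}
    for _ in range(rounds):
        my_move = sample(normalize(weights))
        result = play(my_move, sample(opponent))
        weights[my_move] = max(0.01, weights[my_move] + lr * result)
    return normalize(weights)

if __name__ == "__main__":
    # Start the main agent with a lopsided habit (it loves rock).
    main = normalize({"rock": 8.0, "paper": 1.0, "scissors": 1.0})
    for generation in range(3):
        exploiter = train_exploiter(main)            # find the weakness
        main = train_main_against(main, exploiter)   # patch the weakness
        print(f"generation {generation}: main policy = {main}")
```

Run it and you can watch the main policy get pushed off its rock habit once the exploiter learns to punish it, a miniature of the adversarial training loop the essay is describing.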
AlphaStar doesn’t just know how to win at StarCraft; it knows how to make its competitors lose.
Who knew that one of the first jobs obviated by AI would be that of the video gamer, perhaps the ultimate digital native?
Further, it turns out that reading imperfections in others is a broadly useful part of being intelligent, since the same skill applies to assessing the variables and risks of things and situations. The algorithms could be applied to autonomous driving or to the behavioral triggers of self-actuated robots, according to the MIT Technology Review story.
But that also means they could be applied to reading people’s weaknesses when they make decisions, whether buying toothpaste or, more ominously, making political choices. Imagine telling AlphaStar’s evil twin to go forth into the chat warrens of the social mediaverse and convince people that climate change isn’t real, or that a race war is.
I’m just bummed because StarCraft was so much fun to play, in large part because it kinda played itself every time you made a choice to collect a resource, build something, or go on the offensive.
I wasn’t prepared for it to figure out how to play us.
This essay originally appeared at DaisyDaisy.