DeepMind’s new Go-playing AI wins 90% of matches on four processors

The Verge has a write-up on Google DeepMind's new AI, AlphaGo Zero, which taught itself to play Go by pitting itself against…itself. After three days of self-play, it was trained on just four TPUs (Tensor Processing Units); the prior version required 48 TPUs.

Read more via an article just released in Nature.
