Google Creates Human Brain Using 16,000 Computers
June 26, 2012
Google has an ultra-secret lab called Google X, where it works on ambitious technology projects such as self-driving cars and computerized glasses. The latest project from the Google X lab is a huge network of computers that can learn on their own, similar to a human brain.
The process is called “machine learning,” and it is an attempt by Google to create a computer that can figure out on its own what is present in a YouTube video. The Google research team networked 16,000 computers and fed them 10 million random thumbnail images, each extracted from a different YouTube video. By scanning those thumbnails, the computer cluster worked out on its own what a “cat” looks like, rather than being explicitly programmed to find cats in the videos.
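To give a rough sense of how a network can “teach itself” from unlabeled images, here is a toy single-layer autoencoder in Python. This is only an illustrative sketch, not Google's actual system (which used a far larger, sparse, multi-layer network spread across those 16,000 machines); the data is random noise standing in for thumbnails, and every size and learning-rate value below is an arbitrary assumption:

```python
import numpy as np

# Toy unsupervised learning: an autoencoder learns to compress inputs into
# a small set of "features" and reconstruct them, with no labels at all.
# Sizes and hyperparameters here are arbitrary illustrative choices.

rng = np.random.default_rng(0)

n_inputs = 64   # e.g. an 8x8 image patch, flattened
n_hidden = 16   # number of learned features
lr = 0.1        # gradient-descent step size

# 500 random "patches" standing in for YouTube thumbnail data
X = rng.random((500, n_inputs))

W_enc = rng.normal(0, 0.1, (n_inputs, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, n_inputs))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(200):
    H = sigmoid(X @ W_enc)   # encode: inputs -> features
    X_hat = H @ W_dec        # decode: features -> reconstruction
    err = X_hat - X
    losses.append(np.mean(err ** 2))
    # Backpropagate the reconstruction error through both layers
    grad_dec = H.T @ err / len(X)
    grad_H = (err @ W_dec.T) * H * (1 - H)
    grad_enc = X.T @ grad_H / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"reconstruction loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The point of the sketch is that the only training signal is the data itself: the network improves simply by trying to reproduce what it sees, which is the same basic idea, at a vastly smaller scale, as letting a cluster discover “cat” from millions of thumbnails.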
The neural network taught itself to recognize cats, which is not a frivolous achievement. The Google scientists and programmers were pleased with the result, since the experiment performed better than any previous effort: the network doubled its accuracy at recognizing objects drawn from a challenging list of 20,000 distinct items.
The Google brain has been able to identify objects through repetition, much as an actual human brain does. The network is still tiny compared to the human visual cortex, which, the Google researchers say, is a million times larger in terms of neurons and synapses. The project has been moved out of the Google X laboratory and will now be pursued within the company's search engine division. While they are happy with their results so far, Google researchers remain hesitant to say they have cracked the technology that will allow machines to teach themselves.
If machines in the future are able to teach themselves, I have always wondered whether there will be “good” computers and “bad” computers, much like we have criminals and good Samaritans in our social structure. Would machines create their own social hierarchy if given the capability? Would some go rogue and pursue their own agendas? Who is to say they wouldn’t? If they could teach themselves, some computers might come to focus predominantly on evil, while others choose a more righteous path. This will certainly pose a lot of ethical questions and dilemmas as the research and technology continue to develop. If machines are given “free will,” they are technically able to do anything they wish, whether or not they know and understand the difference between good and evil.