Google Builds Artificial Brain Which Can Recognize A Cat

The Google X laboratory has invented some pretty cool stuff: refrigerators that can order groceries when your food runs low, elevators that could perhaps reach outer space, self-driving cars. So it’s no surprise that their most recent design is the most advanced, highest-functioning, most awesome invention ever… a computer that likes watching YouTube cats?

Okay, it’s a bit more advanced than that. Several years ago, Google scientists began building a neural network for machine learning. The technique Google X employed for this project is called “deep learning,” a method defined by its massive scale. In layman’s terms, they connected 16,000 computer processors and set the resulting network loose on the Internet to simulate the way a human brain learns.

Stanford University computer scientist Andrew Y. Ng led the Google team in feeding the neural network 10 million random digital images taken from YouTube videos. The machine was not “supervised,” i.e. it was never told what a cat is or what features a cat has; it simply looked at the data fed to it. Ng found that a small part of the computer’s “brain” had taught itself to recognize felines. “It basically invented the concept of a cat,” Google fellow Jeff Dean told the New York Times.
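The unsupervised setup described above can be sketched in miniature: a tiny tied-weight autoencoder that learns features from unlabeled data purely by minimizing reconstruction error. The actual Google X network was a vastly larger sparse autoencoder with pooling layers; every name, dimension, and constant below is illustrative only.

```python
import numpy as np

# Toy unsupervised learner: a tiny tied-weight autoencoder that
# compresses unlabeled input vectors and learns to reconstruct them.
# No labels of any kind are used -- the network only sees raw data.

rng = np.random.default_rng(0)
n_inputs, n_hidden = 64, 8          # e.g. 8x8 image patches -> 8 features
W = rng.normal(0, 0.1, (n_hidden, n_inputs))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(patches, lr=0.05, epochs=300):
    """Adjust W to minimize mean squared reconstruction error."""
    global W
    for _ in range(epochs):
        h = sigmoid(patches @ W.T)       # encode
        recon = h @ W                    # decode with the same (tied) weights
        err = recon - patches
        # Gradient flows through both the decode and encode paths
        dh = (err @ W.T) * h * (1 - h)
        grad = (h.T @ err + dh.T @ patches) / len(patches)
        W -= lr * grad

patches = rng.normal(0, 1, (100, n_inputs))  # stand-in for image patches
before = np.mean((sigmoid(patches @ W.T) @ W - patches) ** 2)
train(patches)
after = np.mean((sigmoid(patches @ W.T) @ W - patches) ** 2)
```

After training, `after` is lower than `before`: the network has learned to represent its inputs without ever being told what they contain, which is the essence of the unsupervised approach in the article.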

So Google may have created a machine that can teach itself. But what Ng and his team have done is not as new as you may think. Over the years, as the scale of software simulations has grown, machine learning systems have advanced; last year, Microsoft scientists suggested that the “deep learning” technique could be used to build computer systems to understand human speech. This Google X machine is the cream of the crop—twice as accurate as any other machine before it. However, “it is worth noting that our network is still tiny compared to the human visual cortex,” the researchers wrote, “which is a million times larger in terms of the number of neurons and synapses.”

After “viewing” random pictures from random YouTube videos, the neural network created a digital image of a cat based on its “memory” of the shapes it saw in the images. The cat the computer created is not any specific cat, but what the computer imagines to be a cat. Plato had his Forms, and now Google has its computer-generated cat image.
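That “imagined cat” was produced by searching for the input that most strongly excites the cat-selective unit. A minimal sketch of that optimal-stimulus idea, assuming a toy one-"neuron" linear model where the answer is known in closed form (the real work did this numerically on a deep network):

```python
import numpy as np

# Optimal-stimulus visualization in miniature: find the bounded input
# that maximally activates one unit, by gradient ascent on the input.
# Here the "neuron" is a single linear filter, so the true optimum is
# simply the filter's own weights, normalized.

rng = np.random.default_rng(1)
w = rng.normal(size=25)              # the unit's learned weights (a 5x5 "filter")

def activation(x):
    return w @ x

# Gradient ascent on the input, re-projected onto the unit sphere
x = rng.normal(size=25)
x /= np.linalg.norm(x)
for _ in range(100):
    x += 0.1 * w                     # gradient of a linear activation is w
    x /= np.linalg.norm(x)           # keep the input norm bounded

best = w / np.linalg.norm(w)         # known closed-form optimum
```

The iterate `x` converges to `best`: for a deep network the same procedure yields the ghostly composite image the researchers published, the network’s own picture of a cat.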


Blowfish12@2012 blowfish12.tk Author: Sudharsun. P. R.


Android 4.1 Jelly Bean: Tour & Features Walkthrough [video]



Google Now assistant knows what you need, right now

Yesterday’s Google I/O event saw the company make a big push into hardware, announcing a new tablet and media player and staging a sky-diving demonstration of its augmented reality glasses, but the search giant also leveraged its software chops and massive piles of user data to cook up a rival to Apple’s Siri: Google Now.

While the iPhone’s personal assistant is designed to listen to and answer your queries, Google Now supposedly knows what you want before you even ask. Coming as part of the latest Android 4.1 update, it uses everything Google knows about you to provide relevant information at all times – a vision that Google engineers first outlined in 2010.

For example, if you’ve got an appointment entered into Google Calendar, Now will use your transport preferences in combination with your current location and local traffic conditions to let you know when to leave so you arrive on time. Or, if you are scheduled to take a flight that has since been delayed, Google Now will tell you there is enough time for your regular lunchtime gym workout.
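The “when to leave” reasoning described above boils down to a little date arithmetic; the function and parameter names below are hypothetical illustrations, not Google’s actual API:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a "when to leave" calculation: combine an
# appointment time with an estimated travel time, scaled by current
# traffic, plus a small safety buffer.

def suggested_departure(appointment: datetime,
                        base_travel: timedelta,
                        traffic_factor: float = 1.0,
                        buffer: timedelta = timedelta(minutes=5)) -> datetime:
    """Return the latest time to leave and still arrive on schedule."""
    travel = timedelta(seconds=base_travel.total_seconds() * traffic_factor)
    return appointment - travel - buffer

meeting = datetime(2012, 7, 1, 14, 0)            # 2:00 pm appointment
leave_by = suggested_departure(meeting, timedelta(minutes=30),
                               traffic_factor=1.5)
print(leave_by.strftime("%H:%M"))                # → 13:10
```

A 30-minute trip stretched to 45 minutes by heavy traffic, plus a 5-minute buffer, pushes the suggested departure to 1:10 pm; the real service would feed live location and traffic data into the travel estimate.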

Of course, for Google Now to work you have to be prepared to hand over your entire life to Google’s services, but many Android users have already done so. Google also says that users will be able to opt-in to the level of personalization they prefer, meaning it should be possible to avoid any embarrassing notifications based on your late-night web searches.



Augmented reality glasses look a step ahead of Google


I’m sitting in a bar watching a video of a baseball player. That’s unusual enough in the UK, where the sport has minimal following, but I’m not watching on a wall-mounted flatscreen or even my smartphone. The slugger’s swing plays out on a piece of glass just 2 millimeters thick, part of a prototype augmented reality (AR) glasses system created by display technology firm Vuzix.

“We basically make monitors – really hard-to-build monitors,” says Vuzix’s Clark Dever. The prototype is currently just a single lens jury-rigged to a tiny portable projector, but the results are already impressive. The video image takes up around a quarter of the entire lens and appears to hover around 10 centimeters in front of your face, so it is easy to see your surroundings at the same time. The prototype image is quite bright, though, making the real world look dark in comparison.

Images are displayed on the lens using optical waveguide technology. Light from the projector enters the side of the lens and is split into vertical and horizontal components that pass through tiny optical channels in the glass before being reconstructed in the opposite corner.

These channels are just about visible in the working prototype, but Dever also shows me a newer lens with finer channels, which looks like pure glass. The waveguide technology places a lower limit on the size of the lens, though, meaning the first generation of AR glasses will resemble large 1980s-style shades.

Vuzix’s background is in military hardware: previous versions of its glasses (without the waveguide technology) are already used by the US Army. The special-forces unit that killed Osama bin Laden used a single-eye non-transparent lens to watch drone surveillance footage in real time on the ground. In the next six months the company plans to release a single-eye version of its new transparent lens for military and industrial applications, with full consumer glasses to follow next year.

In doing so, the company hopes to beat Google to the punch, as the capabilities of the search giant’s own AR glasses seem to have diminished since they were first announced in April. It remains to be seen just what you will be able to do with Vuzix’s glasses, though – Dever readily admits that Vuzix is a hardware company, not a software one, but it is currently working with developers to create apps for the glasses.

