11 Apr 2018
Posted By: DPadmin

Google Images update: Captions added to images, pulled from the page title tag 

To add more context to image results, Google will now display a caption with images in mobile Images search results.

Google Images search results continue to evolve, from the rollout of badges last summer to the related searches box this past December and the removal of the “view image” and “search by image” buttons last month. Google has been rapidly expanding visual search features.

Beginning today, Google Images results will now include captions for each image. The rollout is global and will be available for mobile browsers and the Google app (iOS and Android). The caption displayed with an image will be pulled from the title of the page that features the image.

As shown in the image below, the caption will be shown below the image and above the page URL.

Google Images: without captions / with captions

From the announcement:

This extra piece of information gives you more context so you can easily find out what the image is about and whether the website would contain more relevant content for your needs.

When asked if these titles might be rewritten by Google for display with images, a Google spokesperson said, “We use web titles right now, but we’re continually experimenting with ways to improve the experience.” I also asked if Google might at times use captions for the images from the publisher’s page instead of the title tag content and was advised, “Currently we use the web page’s title and nothing more.”
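
Since the caption is, for now, simply the text of the landing page’s title tag, the lookup itself is straightforward. Below is a minimal Python sketch of pulling a page’s <title> to pair with an image; the helper name and sample HTML are illustrative only, not Google’s implementation.

from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    # Collects the text inside the first <title> element.
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title" and not self.title:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def caption_for_image(page_html):
    # Hypothetical helper: per the announcement, the caption shown with an
    # image is just the title of the page that hosts it.
    parser = TitleExtractor()
    parser.feed(page_html)
    return parser.title.strip()

html = "<html><head><title>The best baby shoes ever</title></head><body>...</body></html>"
print(caption_for_image(html))  # prints: The best baby shoes ever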

Discussing examples that, while perhaps outliers, will likely exist, I asked about the relevance between the page title tag and the image itself, and whether quality issues (from the user’s perspective) arise from any disparity. I used a fictional example: say there’s a page titled “The best baby shoes ever” that includes pictures of baby shoes, but also a photo of the author’s Labrador retriever. A person searching “labrador retriever” and seeing the “The best baby shoes ever” title under the Labrador image may assume there’s something wrong with the results.

In regard to this type of scenario, I asked whether any validation was being done between the evaluated content of the image and the content of the page title that would prevent the example above from being returned in a “labrador retriever” Google Images search. The spokesperson replied: “No changes to ranking for this launch. We already use a variety of signals from the landing page to help deliver the most relevant results possible for users.”

Again, this change is global, but only for mobile Google Images searches — via mobile browsers or the Google App. Read the full announcement here.

Source: Google Images update: Captions added to images, pulled from the page title tag – Search Engine Land

28 Feb 2016
Posted By: Guardian Owl

GOOGLE develops a deep learning neural network program


Thanks to Google, a new artificial intelligence system is outperforming humans in spotting the origins of images.

Google has unveiled a new system to identify where photos are taken. The task, simple when images contain famous landmarks or unique architecture, goes beyond the overt to examine small clues hidden in the pixels.

The program, named PlaNet, uses a deep-learning neural network, which means the more images PlaNet sees, the smarter it gets.
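
The PlaNet paper treats geolocation as a classification problem: the planet’s surface is divided into cells, and the network learns to predict which cell a photo was taken in. Here is a toy Python sketch of that framing, using a uniform one-degree grid instead of PlaNet’s adaptive cells and a random stand-in for the trained network; it illustrates the idea, not the actual model.

import random

GRID_ROWS, GRID_COLS = 180, 360  # one-degree cells; PlaNet uses far finer, adaptive cells

def cell_for_location(lat, lon):
    # Map a latitude/longitude pair to a single class index.
    row = min(int(lat + 90), GRID_ROWS - 1)
    col = min(int(lon + 180), GRID_COLS - 1)
    return row * GRID_COLS + col

def location_for_cell(cell):
    # Return the centre of a cell as the predicted coordinates.
    row, col = divmod(cell, GRID_COLS)
    return row - 90 + 0.5, col - 180 + 0.5

def predict_cell(image_pixels):
    # Placeholder for the convolutional network: the real model outputs a
    # probability distribution over cells and the top cell is the guess.
    return random.randrange(GRID_ROWS * GRID_COLS)

# Training pairs are (photo, cell_for_location(lat, lon)); at inference time
# the predicted cell is mapped back to coordinates.
print(location_for_cell(cell_for_location(48.8584, 2.2945)))  # Eiffel Tower -> its cell centre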

“PlaNet is able to localize 3.6% of the images at street-level accuracy and 10.1% at city-level accuracy. 28.4% of the photos are correctly localized at country level and 48.0% at continent level,” wrote the research team.

That’s still a long way from a reliable level of accuracy – but PlaNet already outperforms even the most well-traveled humans.

To compare PlaNet with human accuracy, the researchers matched their program against 10 well-traveled people in Geoguessr, a game that shows a random street-view photo and asks players to identify where they believe it was taken.

PlaNet and its human challengers played 50 rounds in total.

“PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km,” according to the paper.
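
The localization error reported here is the great-circle distance between a guess and the photo’s true location, with the median taken over all rounds. A short Python sketch of that calculation follows; the coordinates are made up, standing in for actual rounds.

from math import radians, sin, cos, asin, sqrt
from statistics import median

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine distance between two points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# (guessed, true) coordinate pairs for a handful of rounds -- invented numbers
rounds = [((48.9, 2.4), (48.8584, 2.2945)),
          ((35.0, 139.0), (34.69, 135.50)),
          ((40.7, -74.0), (41.88, -87.63))]

errors = [great_circle_km(g[0], g[1], t[0], t[1]) for g, t in rounds]
print(f"median localization error: {median(errors):.1f} km")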

Other computer programs are tackling image location as well. Im2GPS has achieved high accuracy by relying on image retrieval to identify location. For example, if Im2GPS were trying to identify where a picture of a forest was taken, it would browse the internet’s millions of forest photos. When it found one that looked almost identical, it would conclude the two were taken in the same place. With enough data, this method can achieve high accuracy, according to the paper.
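
A rough Python sketch of that retrieval idea: represent each reference photo with known coordinates as a feature vector, and give a query photo the coordinates of its most similar match. The vectors below are invented stand-ins for features a real system would compute from the images.

from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Reference photos: (feature vector, known latitude/longitude) -- toy values
reference = [
    ([0.9, 0.1, 0.3], (46.5, 8.0)),    # alpine forest
    ([0.2, 0.8, 0.5], (35.7, 139.7)),  # city street
    ([0.8, 0.2, 0.4], (45.8, 6.9)),    # another alpine forest
]

def locate_by_retrieval(query_features):
    # Return the coordinates of the most visually similar reference photo.
    best = max(reference, key=lambda r: cosine_similarity(query_features, r[0]))
    return best[1]

print(locate_by_retrieval([0.85, 0.15, 0.35]))  # -> (46.5, 8.0)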

The researchers trained the neural network using 29.7 million public photos from Google+. The neural network relies on clues and features from photos it has already seen to help identify the most likely whereabouts of a new image.

The program has some limitations. Because it depends on internet images, PlaNet is at a disadvantage when confronted with rural countrysides and other rarely photographed locales. The team also left out large swaths of the Earth, including oceans and the polar caps.

Tobias Weyand, the lead author on the project, noted that supplementing internet photos with satellite images resolved some of these weaknesses. PlaNet also focuses on landscapes and other factors besides landmarks, making it more accurate at identifying non-city images than other programs.

Source: GOOGLE develops a deep learning neural network program