Product Promotion Network


Trulia Neighborhoods lets you see crowdsourced local reviews of an area before you move in

Getting a sense of what a neighborhood is like before moving there can be overwhelming when it means piecing together scraps of information from multiple websites. To help, real estate website Trulia is launching Neighborhoods as a guide for buyers and renters. The tool features crowdsourced local reviews and photos to offer a better sense of a particular area, down to parent reviews of schools, insights on commutes, and local safety.

Users can also read up on other intangible factors like vibe, noise levels, and local insights that are harder to research through Google.

Neighborhoods builds upon Trulia’s What Locals Say feature that launched earlier this year. Users can read resident insights like how much street parking is available, and whether a park is dog-friendly. So far, more than 15 million locals have submitted reviews and feedback.

The feature also builds on Trulia’s Local Legal Protections tool, which lets homebuyers know if their new home is in an area that has laws to prevent discrimination based on sexual orientation and gender identity.

Neighborhoods also has an “Inside the Neighborhood” feature, which uses the now ubiquitous stories format to display photos and information about parts of a city.

Trulia Neighborhoods is available nationally, and currently offers original photography and drone footage for 300 neighborhoods in cities including San Francisco, Oakland, San Jose, Austin, and Chicago.

Trulia plans to add photos for 1,100 more neighborhoods through the end of this year.

This robot uses AI to find Waldo, thereby ruining Where’s Waldo

If you’re totally stumped on a page of Where’s Waldo and ready to file a missing persons report, you’re in luck. Now there’s a robot called There’s Waldo that’ll find him for you, complete with a silicone hand that points him out.

Built by creative agency Redpepper, There’s Waldo zeroes in on Waldo with sniper-like accuracy. The metal robotic arm is a Raspberry Pi-controlled uArm Swift Pro equipped with a Vision Camera Kit that allows for facial recognition.

The camera takes a photo of the page, and the software uses OpenCV to find possible Waldo faces in it. The faces are then sent for analysis to Google’s AutoML Vision service, which has been trained on photos of Waldo. If the robot determines a match with 95 percent confidence or higher, it’ll point to all the Waldos it can find on the page.
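The two-stage flow described here (detect candidate faces, then classify each one and act only above a confidence threshold) can be sketched in Python. This is a hedged illustration, not Redpepper’s actual code: the `Candidate` type, the `find_waldos` function, and the stub classifier are all hypothetical stand-ins for the real OpenCV detector and the AutoML Vision model.

```python
# Hypothetical sketch of the There's Waldo pipeline: a detector produces
# candidate face boxes, a classifier scores each, and only candidates at
# or above the 95% threshold mentioned in the article are "pointed at".
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Candidate:
    box: Tuple[int, int, int, int]  # (x, y, w, h) of a detected face
    confidence: float = 0.0         # filled in by the classifier

CONFIDENCE_THRESHOLD = 0.95  # the article's 95 percent cutoff

def find_waldos(
    candidates: List[Candidate],
    classify: Callable[[Candidate], float],
) -> List[Candidate]:
    """Score each candidate face and keep those meeting the threshold."""
    matches = []
    for c in candidates:
        c.confidence = classify(c)
        if c.confidence >= CONFIDENCE_THRESHOLD:
            matches.append(c)
    return matches

# Demo with a stub classifier that "knows" where Waldo is; in the real
# system this call would go to the trained AutoML Vision model.
faces = [Candidate((10, 20, 30, 30)), Candidate((200, 50, 28, 28))]
waldo_boxes = {(200, 50, 28, 28)}
stub_classifier = lambda c: 0.99 if c.box in waldo_boxes else 0.10
print([m.box for m in find_waldos(faces, stub_classifier)])
# [(200, 50, 28, 28)]
```

The threshold check is what keeps the robotic arm from pointing at every red-and-white-striped bystander: only candidates the classifier scores at 0.95 or higher survive.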

Google’s Cloud AutoML has been available since January to let users train their own AI tools without any previous coding knowledge.

The drag-and-drop tool lets anyone create an image recognition tool, which has a variety of use cases such as categorizing photos of ramen by the shops they came from. You can catch a glimpse of this process in the video above, in which different photos of Waldos are fed into the software.

Matt Reed, the Creative Technologist at Redpepper who shepherded the project, explained via email: “I got all of the Waldo training images from Google Image Search; 62 distinct Waldo heads and 45 Waldo heads plus body. I thought that wouldn’t be enough data to build a strong model but it gives surprisingly good predictions on Waldos that weren’t in the original training set.” Reed was inspired by Amazon Rekognition’s ability to recognize celebrities, and wanted to experiment with a similar system that supported cartoons.

He had no prior experience with AutoML, and it took him about a week to code the robot in Python.

To me, this is like the equivalent of cheating on your math homework by looking for the answers at the back of your textbook. Or worse, like getting a hand-me-down copy of Where’s Waldo and when you open the book, you find that your older cousin has already circled the Waldos in red marker. It’s about the journey, not the destination — the process of methodically scanning pages with your eyes is entirely lost!

But of course, no one is actually going to use this robot to take the fun out of Where’s Waldo; it’s just a demonstration of what AutoML can do.

Reed listed a few more possible applications: “Maybe a fun use would be seeing what cartoon character the AI thinks you look closest to? Maybe could detect comic book forgeries?”

Redpepper’s video description boasts: “While only a prototype, the fastest There’s Waldo has pointed out a match has been 4.45 seconds which is better than most 5 year olds.” If this is a competition, we really can’t win against the machines.

