Product Promotion Network


Using Your Smartphone In The Dark Risks Speeding Up Vision Loss

The blue light emitted by your smartphone and laptop screens may seem harmless, but according to new research, it can be toxic to your eyes. Earlier this week, scientists at the University of Toledo said they’ve uncovered how blue light can lead to macular degeneration, a leading cause of vision loss in the US. Essentially, the light waves carry enough energy to erode the health of your eyes over time.

“It’s no secret[1] that blue light harms our vision by damaging the eye’s retina. Our experiments explain how this happens,” said University of Toledo professor Ajith Karunarathne in a statement[2]. On the light spectrum, blue light has a shorter wavelength, and thus carries more energy than red, yellow or green light.
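The wavelength-to-energy relationship the researchers describe follows directly from the photon energy formula E = hc/λ. As a quick illustration (using typical wavelengths for blue and red light, not values from the study itself):

```python
# Photon energy E = h*c/lambda: shorter wavelengths carry more energy
# per photon, which is why blue light packs more punch than red.
h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon at the given wavelength, in electronvolts."""
    joules = h * c / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # 1 eV = 1.602e-19 J

blue = photon_energy_ev(450)  # typical blue light, ~2.76 eV
red = photon_energy_ev(650)   # typical red light, ~1.91 eV
```

Blue photons here come out roughly 45 percent more energetic than red ones, consistent with the article’s point that blue sits at the high-energy end of the visible spectrum.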

That extra energy is why blue light can be bad for your eyes. Too much exposure can trigger a toxic reaction that’ll kill the light-sensing photoreceptor cells in your retinas. “No activity is sparked with green, yellow or red light,” Karunarathne said, noting that the “retinal-generated toxicity” was caused only by blue light.

Another molecule in your retinas normally acts as an antioxidant that prevents eye cells from dying. But as people grow older, the immune system struggles to keep those cells healthy. As a result, constant bombardment with blue light may well raise someone’s chances of developing macular degeneration. “Photoreceptor cells do not regenerate in the eye,” said Kasun Ratnayake, a PhD student researcher who also worked on the study. “When they’re dead, they’re dead for good.”

So, how can you protect yourself? Unfortunately, blue light can be hard to avoid. It can come from sunlight and from our smartphones and PCs, which often sit directly in front of our faces.

But the researchers say people should be careful about using their electronic devices in the dark, since doing so can focus the blue light directly into your eyes. “That can actually intensify the light emitted from the device many, many fold,” Karunarathne told[3] Popular Science. “When you take a magnifying glass and hold it to the sun, you can see how intense the light at the focal point gets. You can burn something.”

People can also consider wearing sunglasses or other eyewear designed to filter out blue light. In the meantime, Karunarathne is exploring whether an eye drop solution can be developed to counter the harmful effects.

The scientists detailed their findings in a study[4] published in Scientific Reports last month.

References

  1. ^ no secret (www.macular.org)
  2. ^ statement (utnews.utoledo.edu)
  3. ^ told (www.popsci.com)
  4. ^ study (www.nature.com)

This robot uses AI to find Waldo, thereby ruining Where’s Waldo

If you’re totally stumped on a page of Where’s Waldo and ready to file a missing persons report, you’re in luck. Now there’s a robot called There’s Waldo that’ll find him for you, complete with a silicone hand that points him out.

Built by creative agency Redpepper, There’s Waldo zeroes in on Waldo with sniper-like accuracy. The metal robotic arm is a Raspberry Pi-controlled uArm Swift Pro equipped with a Vision Camera Kit that allows for facial recognition.

The camera takes a photo of the page, and the system then uses OpenCV to find possible Waldo faces in the photo. Those faces are sent for analysis to Google’s AutoML Vision service, which has been trained on photos of Waldo. If the robot determines a match with 95 percent confidence or higher, it points to all the Waldos it can find on the page.
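The core of that pipeline is simple: detect candidate faces, score each one, and keep only matches at or above the 95 percent cutoff. A minimal sketch of the thresholding step is below; the classifier is a stand-in stub for illustration, since the real robot sends each face crop to Google’s AutoML Vision service:

```python
# Sketch of the There's Waldo matching step: given candidate face regions
# and a classifier that scores how Waldo-like each one is, keep every
# region at or above the confidence threshold. The classifier below is a
# hypothetical stub, not the actual AutoML Vision call.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) of a candidate face


@dataclass
class Match:
    box: Box
    confidence: float


def find_waldos(
    candidates: List[Box],
    classify: Callable[[Box], float],  # returns P(this face is Waldo)
    threshold: float = 0.95,           # the robot's 95 percent cutoff
) -> List[Match]:
    """Return every candidate the classifier scores at or above threshold."""
    matches = []
    for box in candidates:
        score = classify(box)
        if score >= threshold:
            matches.append(Match(box, score))
    return matches


# Illustrative scores: one strong Waldo candidate, one weak one.
scores: Dict[Box, float] = {(10, 20, 32, 32): 0.98, (100, 40, 30, 30): 0.40}
matches = find_waldos(list(scores), classify=lambda b: scores[b])
```

In the real build, the robot arm then moves to each surviving box’s coordinates and points at it with the silicone hand.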

Google’s Cloud AutoML has been available since January to let users train their own AI tools without any previous coding knowledge.

The drag-and-drop tool lets anyone build an image recognition model, with a variety of use cases such as categorizing photos of ramen by the shops they came from. You can catch a glimpse of this process in the video above, in which different photos of Waldos are fed into the software.

Matt Reed, the Creative Technologist at Redpepper who shepherded the project, explained via email: “I got all of the Waldo training images from Google Image Search; 62 distinct Waldo heads and 45 Waldo heads plus body. I thought that wouldn’t be enough data to build a strong model but it gives surprisingly good predictions on Waldos that weren’t in the original training set.” Reed was inspired by Amazon Rekognition’s ability to recognize celebrities, and wanted to experiment with a similar system that supported cartoons.

He had no prior experience with AutoML, and it took him about a week to code the robot in Python.

To me, this is like the equivalent of cheating on your math homework by looking for the answers at the back of your textbook. Or worse, like getting a hand-me-down copy of Where’s Waldo and when you open the book, you find that your older cousin has already circled the Waldos in red marker. It’s about the journey, not the destination — the process of methodically scanning pages with your eyes is entirely lost!

But of course, no one is actually going to use this robot to take the fun out of Where’s Waldo; it’s just a demonstration of what AutoML can do.

Reed listed a few more possible applications: “Maybe a fun use would be seeing what cartoon character the AI thinks you look closest to? Maybe could detect comic book forgeries?”

Redpepper’s video description boasts: “While only a prototype, the fastest There’s Waldo has pointed out a match has been 4.45 seconds which is better than most 5 year olds.” If this is a competition, we really can’t win against the machines.
