Phone Case of the Month is a monthly series in which we live with, and subsequently review, a phone case. Phone cases are one of our only ways to express individuality with our smartphones, so what do our phone case choices say about us?
It’s been a while since I last posted a phone case review, and I’m sorry. But I’m thrilled to report that I’ve discovered an incredible phone case hack.
You might remember my last review, which involved cherry pom-poms. Today’s review is about all the things I can do with that one case because of its back-mounted metal loop.
The case originally came with those cherries attached to the back with a key ring. The key ring eventually broke, though, and the cherries fell off, so I was basically stuck with a case that had an ugly, cheap-looking metal loop on it.
But then, I moved apartments and unearthed a keychain I was given with a nail polish purchase.
Yeah, random. It features a bright pink pom-pom with a pink tassel and a bedazzled butterfly. I tried hooking it onto the phone, and sure enough, it filled the cherry pom-pom void in my heart.
After a month or so in my bag, though, the pink started to turn black with dirt. That wasn’t my favorite look!
Then, while browsing my local dollar store, I saw a strange keychain made out of a bike chain. I think it might be designed for attaching to a wallet and then hanging from your belt loop?
I might try that out at some point. I bought it solely for my phone, and it looks extremely industrial. I love it.
I wear the loop around my wrist or carry it like a purse. Either way, it feels more like a fashion statement. I only wish the metal loop on the back of the case was silver so it matched.
I love this case because I can change it up without fully committing to a whole new design.
A keychain makes my case feel new, and this time around, I’m fully leaning into the hardware trend. Cherries felt good at one time, when spring was just happening and summer was on its way. Now, it’s been over 90 degrees for what feels like years, and all I can think about is fall.
I needed this bike chain in my life.
I don’t know when I’ll change this case out. It’s definitely falling apart a little bit; the sides are peeling up and even the “leather” is peeling, too. If I do end up with a new case, I’ll post about it, but for now, know that my cases and I are in a good place.
When I walked around the exhibition floor at this week’s massive Black Hat cybersecurity conference in Las Vegas, I was struck by the number of companies boasting about how they are using machine learning and artificial intelligence to help make the world a safer place.
But some experts worry vendors aren’t paying enough attention to the risks associated with relying heavily on these technologies. “What’s happening is a little concerning, and in some cases even dangerous,” warns Raffael Marty of security firm Forcepoint. The security industry’s hunger for algorithms is understandable. It’s facing a tsunami of cyberattacks just as the number of devices being hooked up to the internet is exploding.
At the same time, there’s a massive shortage of skilled cyber workers (see “Cybersecurity’s insidious new threat: workforce stress”). Using machine learning and AI to help automate threat detection and response can ease the burden on employees, and potentially help identify threats more efficiently than other software-driven approaches.
But Marty and some others speaking at Black Hat say plenty of firms are now rolling out machine-learning-based products because they feel they have to in order to get an audience with customers who have bought into the AI hype cycle. And there’s a danger that they will overlook ways in which the machine-learning algorithms could create a false sense of security.
Many products being rolled out involve “supervised learning,” which requires firms to choose and label the data sets that algorithms are trained on, for instance by tagging code that’s malware and code that is clean. Marty says one risk is that in rushing to get their products to market, companies use training data that hasn’t been thoroughly scrubbed of anomalous data points. That could lead to the algorithm missing some attacks.
Another is that hackers who get access to a security firm’s systems could corrupt data by switching labels so that some malware examples are tagged as clean code. The bad guys don’t even need to tamper with the data; instead, they could work out the features of code that a model is using to flag malware and then remove these from their own malicious code so the algorithm doesn’t catch it.
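The label-poisoning risk described above can be sketched with a toy classifier. This is an illustrative example, not any vendor's actual pipeline: a hypothetical nearest-centroid detector is trained on two-dimensional feature vectors labeled “clean” or “malware,” and an attacker who flips the labels on just two malware samples drags the “clean” centroid toward the malware cluster, so a borderline sample that was previously caught now slips through.

```python
import math

def centroid(points):
    # Average each coordinate across the class's training points.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, labeled):
    # Nearest-centroid: return the label whose class centroid is closest.
    best = None
    for label, points in labeled.items():
        d = math.dist(sample, centroid(points))
        if best is None or d < best[0]:
            best = (d, label)
    return best[1]

# Toy labeled training data (hypothetical 2-D feature vectors).
clean   = [(0, 0), (1, 0), (0, 1)]
malware = [(5, 5), (6, 5), (5, 6)]

suspicious = (3, 3)  # a borderline sample, slightly nearer the malware cluster

# Before poisoning, the borderline sample is flagged as malware.
print(classify(suspicious, {"clean": clean, "malware": malware}))    # malware

# An attacker with access flips the labels on two malware samples,
# so they are now treated as "clean" during training.
poisoned_clean   = clean + [(5, 5), (6, 5)]
poisoned_malware = [(5, 6)]

# The corrupted "clean" centroid shifts toward the malware cluster,
# and the same borderline sample is now misclassified as clean.
print(classify(suspicious, {"clean": poisoned_clean,
                            "malware": poisoned_malware}))           # clean
```

Real detectors operate on far richer features, but the failure mode is the same: the model is only as trustworthy as the labels it was trained on.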
One versus many
In a session at the conference, Holly Stewart and Jugal Parikh of Microsoft flagged the risk of overreliance on a single, master algorithm to drive a security system. The danger is that if that algorithm is compromised, there’s no other signal that would flag a problem with it.
To help guard against this, Microsoft’s Windows Defender threat protection service uses a diverse set of algorithms with different training data sets and features. So if one algorithm is hacked, the results from the others, assuming their integrity hasn’t been compromised too, will highlight the anomaly in the first model.
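The diversity defense can be sketched in a few lines. This is a simplified illustration, not Microsoft's implementation: three hypothetical detectors, each built on different features, vote on a sample; a majority verdict is returned, and any detector that disagrees with the majority is flagged for review, since it may be compromised (or may have caught something the others missed).

```python
def detect_with_ensemble(sample, detectors):
    # Collect each independent detector's verdict.
    votes = {name: fn(sample) for name, fn in detectors.items()}
    verdicts = list(votes.values())
    majority = max(set(verdicts), key=verdicts.count)
    # Detectors that disagree with the majority are worth investigating:
    # either they were tampered with, or they saw something the rest missed.
    disagreeing = [name for name, v in votes.items() if v != majority]
    return majority, disagreeing

# Three hypothetical detectors trained on different features.
detectors = {
    "byte_histogram": lambda s: "malware" if s["entropy"] > 7.0 else "clean",
    "api_calls":      lambda s: "malware" if "VirtualAllocEx" in s["apis"] else "clean",
    "poisoned_model": lambda s: "clean",  # imagine this one was tampered with
}

sample = {"entropy": 7.6, "apis": ["VirtualAllocEx", "WriteProcessMemory"]}
verdict, outliers = detect_with_ensemble(sample, detectors)
print(verdict, outliers)  # malware ['poisoned_model']
```

The ensemble still reaches the right verdict, and the compromised model outs itself by disagreeing, which is exactly the signal a single-algorithm system would lack.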
Forcepoint’s Marty also notes that with some very complex algorithms it can be very difficult to work out why they produce the answers they do. This “explainability” problem can make it hard to assess what’s driving any anomalies that crop up (see “The dark secret at the heart of AI”). None of this means that AI and machine learning shouldn’t have an important role in a defensive arsenal.
The message from Marty and others is that it’s really important for security companies–and their customers–to monitor and minimize the risks associated with algorithmic models.
That’s no small challenge given that people with the ideal combination of deep expertise in cybersecurity and in data science are still as rare as a cool day in a Las Vegas summer.