Would you trust an AI to be your eyes? (Ep. 437)
The home team is joined by friend of the show Adam Lear, a staff software engineer on the public platform at Stack Overflow, to discuss AR glasses that help the blind navigate IRL and how we might reimagine cities for remote work.
Episode notes:
The crew has complicated feelings about products like Apple’s augmented reality glasses and Google Glass. Ceora put it best: “I’m very cautious about any big tech company having any more access to my perception of reality.”
On the other hand, products like Envision smart glasses that help visually impaired people navigate their environments exemplify how AR technology can enable accessibility and empower users.
Speaking of different perceptions of reality, New York mayor Eric Adams dusts off that old chestnut about how remote workers “can’t stay home in your pajamas all day.” (Watch us.)
Matt recommends Oh My Git!, an open-source game that teaches Git. Ceora recommends Popsy, which allows you to turn your Notion pages into a website for free.
And some recommended reading: How to make the most out of a mentoring relationship from the GitHub blog and How to use the STAR method to ace your job interview from The Muse.
Today’s Lifeboat badge goes to user metadept for their answer to Generate a two-digit positive random number in JavaScript.
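For the curious, here's a minimal sketch of one common approach to that question (not necessarily metadept's exact answer): a two-digit positive number is an integer from 10 to 99, so you scale Math.random() across those 90 possible values.

```javascript
// Return a random integer in [10, 99], i.e. a two-digit positive number.
// Math.random() yields a float in [0, 1); multiplying by 90 and flooring
// gives an integer in [0, 89], and adding 10 shifts it into [10, 99].
function twoDigitRandom() {
  return Math.floor(Math.random() * 90) + 10;
}

console.log(twoDigitRandom()); // e.g. 42
```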
Find Adam on LinkedIn here.
Tags: the stack overflow podcast
6 Comments
If one looks at society and technology over the last four decades or so (no references, just things I’ve read and am interested in), a pattern seems clear.
IMHO, as technology advances, personal privacy falls along almost the same line: the more and better the tech, the less privacy. It seems inevitable. Tech will keep going up, whereas you can only lose so much privacy, so we’re going to hit the lack-of-privacy limit before long.
Google Glass didn’t catch on because nobody wants to be near someone who may be recording live audio and video all the time.
If I had $10 billion and, the *really* hard part, a small group (2 to 12 or so) of top AI R&D people working on different aspects of real AI (most of what’s called AI today is still really just expert systems, only better and built differently), I’d say we’d have real AI within a decade, and all these problems people talk about would go away. What makes it harder than the $10 billion is that every person in that group would have to be someone I trust with (so much more than just) the lives of 7 billion people. If *I* write it first, nobody will die for lack of clean water, good food, or housing; so many people have no idea. But if one of the outfits with a mega-billion-dollar annual(!) budget writes a true AI first, it could be very scary. Way, way worse than just the extinction of humanity.
I think I know the logistics of pretty much all of it, but it’s hard to find people to bounce ideas off of; I’d have to trust them with 7 billion lives first.
(About the UV thing: if you use that by modifying a camera, you can see through many people’s clothes. Not *totally*, but pretty close, and more so in some cases. It’s an old trick, removing the IR filter or some such thing.)
Current artificial intelligence is quite fast at identifying objects in an image, but nowhere near as fast as the human eye and brain. If AI got to the point where it could identify an object, its surroundings, and its distance from you, then yes, I would be willing to let an AI be my eyes.
“Would you trust an AI to be your eyes?”
Yes. In fact, I did – I got LASIK. 😅
I don’t know what I’m doing, what’s going on, or what I’m supposed to do. I tried giving someone trust by opening up to them because I originally thought I could trust them, but they have let me destroy my life over the past year while I’ve begged them to help me. I’ve been ignored and have no clue what I’m doing.
It looks like AI is inevitable, but no appropriate legislation, or even moral code, has been created for this highly explosive field.
To answer the question in the article (or any other “should we…” question regarding AI), we have to define two constraints:
1. We must always be able to stop/shut down/disable it.
2. We should limit its abilities to specific functionalities, and never combine several AIs.
Unfortunately, as we get crazier by the minute, many countries are even looking for ways to weaponize AI, something that IMHO is a crime against humanity and should be stopped immediately, before something very bad happens.
Human beings have no chance against AI, so we have to treat it with extreme caution, and certainly not the way we approach it today.
Hopefully governments will wake up and form a treaty to standardise all AI development licensing, rules, and protocols before it’s too late.
“Would you trust an AI to be your eyes?”
My white cane doesn’t need a battery. Human echolocation doesn’t either. And they can’t have bugs.