They would have been eventually, and possibly already had been, just not public knowledge.
I’ve had a feeling for a while that rather than finding true 0-days, the AI is really finding intentional backdoors that the company and the government already know about.
That being said, humans are great pattern recognition machines, and we tend to prefer a false positive to finding an error. That’s why typos in the middle of a word are unnoticeable, and people zone out on long highway drives.
We want everything to move smoothly, and we offload anything repetitive to our subconscious. So we’re dogshit at noticing small errors. That’s why people recommend proofreading backwards.
It’s innately human; when it comes to “working memory,” chimps absolutely destroy us.
The AI never “zones out.” It can’t create stuff like a human can, which is a negative. But it doesn’t check for errors like a human either, which is a benefit.
Just in general, we shouldn’t be trying to make AI that replaces humans; we need it for shit like this, where humans aren’t good at it.
So why were they not spotted by a human?
Attention is all you need.
I don’t think most people understand that’s the title of the paper which spawned all of this…
barely pay my mortgage, now I have to pay attention too?!
We need cyborgs, not androids.
I see what you did there.