
I suppose it gets into philosophical debates about AI, but I don't see any reason in principle that we can't describe at least some kinds of computerized systems as exercising something properly called "judgment". We already have one existence proof, the human brain, of a system that can exercise something we call "judgment", and I don't see a strong reason to believe that this is due to anything magical about the human brain in particular (like a soul or something along those lines), rather than to its just being a complex reasoning system that's able to balance many contextual factors.


Automated systems as they currently exist have fixed responses to fixed inputs. If an automated system encounters a set of inputs it wasn't programmed for, it has no capacity to determine the best course of action for those inputs. Depending on how it was programmed, it will either keep doing what it was doing previously, switch to a pre-programmed "contingency plan" that hopefully results in a tolerable outcome (but might result in catastrophe), or possibly execute a random set of instructions (which can happen to poorly-designed state machines, for example).
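
To make that concrete, here's a toy sketch in Python (everything in it is invented for illustration, not taken from any real control system): known inputs map to fixed responses, and anything unrecognized can only fall through to a single pre-programmed fallback.

    # Toy model of a fixed-response automated system. All condition
    # and action names are hypothetical.
    RESPONSES = {
        "cruise": "hold_heading_and_altitude",
        "engine_fire": "shut_down_engine",
        "low_fuel": "divert_to_alternate",
    }

    def respond(condition):
        # No capacity to reason about novel inputs: it's a lookup,
        # with one blanket contingency for everything else.
        return RESPONSES.get(condition, "execute_contingency_plan")

    print(respond("engine_fire"))   # shut_down_engine
    print(respond("bird_strike"))   # execute_contingency_plan --
                                    # never programmed for this input

Nothing in respond() can weigh the particulars of "bird_strike"; the best it can do is the blanket fallback, which is exactly the limitation described above.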

A human being, on the other hand, has the capacity, when faced with unexpected or unfamiliar conditions, to exercise something we call "judgment" in an attempt to develop an appropriate response. I'm not saying that it's impossible for an automated system to have this capacity; I'm saying that no current automated system has it, and that we're nowhere near developing one.


That's definitely the case with deployed civilian aircraft systems, but I'd be surprised if there weren't some unmanned system somewhere doing more complex reasoning. Years ago at IJCAI there was a talk by some people from NASA Ames on a prototype aircraft-control system they'd built, which used a reasoning system to assist with emergency landings in situations with no preprogrammed contingency by taking into account telemetry (e.g. aircraft damage), map information, an aerodynamic model, and risk models.
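
I don't know the internals of that system, but conceptually that kind of reasoning might look something like the following sketch (all names, fields, and weights here are mine, not NASA's): filter candidate landing sites by what the damaged aircraft can still reach, then rank the survivors with a risk-weighted score.

    # Illustrative sketch only; structure and numbers are invented,
    # not taken from the NASA Ames system.
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        distance_km: float      # from map information
        terrain_risk: float     # 0..1, from a risk model
        population_risk: float  # 0..1, from a risk model

    def glide_range_km(altitude_m, glide_ratio):
        # Aerodynamic model; in a real system, telemetry (e.g.
        # aircraft damage) would degrade glide_ratio.
        return altitude_m * glide_ratio / 1000.0

    def best_site(sites, altitude_m, glide_ratio):
        max_range = glide_range_km(altitude_m, glide_ratio)
        reachable = [s for s in sites if s.distance_km <= max_range]
        # Lower score is better: trade distance off against risk.
        score = lambda s: (s.distance_km
                           + 50 * s.terrain_risk
                           + 100 * s.population_risk)
        return min(reachable, key=score, default=None)

    sites = [Site("airport", 40.0, 0.1, 0.3),
             Site("dry lakebed", 15.0, 0.3, 0.0)]
    print(best_site(sites, altitude_m=10000, glide_ratio=9))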

I do believe they were planning to deploy it as a suggestion system, though: it would suggest a course of action and leave it to the pilot to implement it or not. Then the judgment gets murkier; now the system is doing some of the judgment (evaluating alternatives, etc.) that a human pilot would normally do, but leaving some of it (accepting or modifying the suggestion) to the human.
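
In code terms, that division of labor might look like this (again a toy sketch, with names of my own invention):

    # The system's share of the judgment: evaluate and rank alternatives.
    def advise(alternatives, score):
        return min(alternatives, key=score)

    # The human's share: accept the suggestion or substitute another plan.
    def decide(suggestion, pilot_override=None):
        return pilot_override if pilot_override is not None else suggestion

    risks = {"divert to alternate": 0.2, "land on highway": 0.7}
    suggestion = advise(risks, score=risks.get)   # machine judgment
    plan = decide(suggestion)                     # human judgment
    print(plan)                                   # divert to alternate

Which of those two calls constitutes "the" judgment is exactly the murkiness in question.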

edit: Here's a more recent paper than the one I was thinking of, but it must be the same project: http://ti.arc.nasa.gov/m/profile/de2smith/publications/IAAI0...



