#71 Rive (Science Advisor)
The AI applications that brought up the issue are about on-the-fly decisions in a complex environment (one that is hard or impossible to algorithmize, and often complicated by human or other irregular interaction/intervention), such as driving cars.

russ_watters said:
Are you trying to say that the current definition/status of AI is too broad/soft, so current experiences are not relevant?
Above a certain complexity, once you can no longer properly formulate the question, you cannot design a test to thoroughly validate your product. The software 'industry' has known this for a long time: even if things started out under the strictest requirements and guarantees, that era is long past for most products. Most of the software sold today comes with no warranties. None at all. (And this area is still considered deterministic.)
So, regarding complex software: either you drop the idea of 'self'-driving cars (and products with a similar level of complexity/generality), or you drop the hard requirements.
Given that under hard requirements no human would ever qualify to drive a car, I assume that sooner or later there will be some kind of compromise between hard requirements and real-life statistics.
Of course, there will be different compromises for different businesses. I'm perfectly happy for medical instruments/applications to get their AIs 'the hard way'.
Although I expect some accidents/cases in the coming decades that may retrospectively, from the far future, be characterized as preliminary consciousness or something like that (let's give that much to sci-fi), I think in this round it will be just statistics and (mandatory) insurance, in the monetary sense.

russ_watters said:
And under a narrower/harder definition things might be different? If so, sure, but there will be a clear-cut marker even if not a clear-cut threshold for when the shift in liability happens: legal personhood.