Ask HN: Who'll take ownership of AI's mistakes?
2 by wg0 | 2 comments on Hacker News.
A lot of companies are being formed around AI automation of the enterprise space, and there's plenty of optimism about chained-together AI agents working autonomously without much human intervention. Genuine question - when traditional software makes a mistake, it's usually deterministic, debuggable, and fixable, and blame can be assigned. What's the deal with these autonomous AI agents? Let's say an agent is analysing customs paperwork to schedule shipments from overseas, and it fails to let a shipment in because it misclassified the paperwork - or worse, it lets the shipment in, but doing so under certain conditions leads to heavy financial penalties. Who's responsible? The AI prompt automation engineer? The underlying platform? The company providing the model? And if the answer is that each outcome of such a model should be double-checked by a human going through all that paperwork, then what's the point of having the automation in the first place? EDIT: typos