When generative AI can’t even answer simple questions correctly, it feels like too imprecise a tool to trust in a domain that demands this much precision.
I don’t think this would be generative AI though. Machine learning is probably a better fit - training a model based on recordings of human air traffic controllers.
The problem is that AI companies are not responsible actors, even when it comes to working with the government. We saw that with the use of Claude in Iran.
They will surely claim they’ve acted responsibly in building whatever product they roll out in front of DOT, but I’ve seen more than enough shitty, cobbled-together products from them to believe they’ll take shortcuts and fail to do the work needed to make it function properly.
I agree.