I don't get it. Why are people using LLMs without double checking? I treat it like a dumb assistant that needs a double check before finalizing. Even though I have to double check, it's still very helpful given how quickly it can produce an answer.
It amazes me that you could use AI more than a few times and not realize you need to double check.
But then what? They used it for the first time???
How does this keep happening? If I know it keeps happening and pisses off judges, and I don't even work in the profession, how bad do you have to be to not know you can't do that?
This has been made fun of on late night shows.
It happens because the people who do it either think they can get away with it or don't understand that confident-sounding output isn't necessarily accurate. Until the consequences are serious enough, slop will be slung [1]. It's too easy, and mostly consequence free until someone gets caught and penalties are applied.
[1] https://hn.algolia.com/?q=slop
Order to show cause: https://storage.courtlistener.com/recap/gov.uscourts.cod.215...
Wow, Mike Pillow just can't do anything right. He's been snakebit since November 2020.
I'm not a lawyer, but that order sounds ominous.