It can work pretty well as long as you're in control and you don't take the result as truth.
But doesn’t this make the whole point null and void? Like obviously if you’re running it through and getting an output you do have to take elements of it as truth.
What I mean is that you have to be able to judge whether the output is correct. So you don’t take its truth at face value.
In my example, obviously correct input is filtered out, leaving only potential errors. It takes much less effort to upload a sheet and give criteria and instructions than to manually look through everything (though, granted, you can probably come pretty far with just ctrl+f too).
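That kind of pre-filtering can also be done deterministically, as a baseline for comparison. Here is a minimal sketch: the sheet contents, column names (`email`, `amount`), and validity criteria are all made-up examples, not taken from the discussion above.

```python
import csv
import io
import re

# Hypothetical example sheet; in practice this would be a real CSV file.
SHEET = """name,email,amount
Alice,alice@example.com,10.50
Bob,bob[at]example.com,7
Carol,carol@example.com,-3.2
"""

def looks_wrong(row):
    """Return True if the row fails any of the simple sanity checks."""
    checks = [
        # Malformed email address (very rough pattern, for illustration only).
        not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", row["email"]),
        # Negative amount, assumed invalid for this hypothetical sheet.
        float(row["amount"]) < 0,
    ]
    return any(checks)

# Obviously correct rows are filtered out; only potential errors remain.
flagged = [r for r in csv.DictReader(io.StringIO(SHEET)) if looks_wrong(r)]
for row in flagged:
    print(row["name"])
```

Running this leaves only Bob (bad email) and Carol (negative amount) for manual review, which is the same "surface only the suspicious rows" effect, just with hand-written criteria instead of natural-language instructions.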
There are things LLMs are good at, but they’re just a tool like any other.