I have been experimenting with chatgpt, and it is impressive, but I found a rather disturbing shortcoming and I am curious how it happens. Ask it questions that you already know the answers to. Eventually it will give a completely wrong answer (with confidence). That's not surprising; it can't be perfect, of course. But here is where it gets interesting. Tell it that it's wrong. It pretty reliably knows if it is, and will either rebut you if it was correct or agree with you that it was wrong. So it has the correct information available but still tells you false information. If it were human, we would call that a deliberate lie. I am convinced this is a programming issue and not an agenda, because it will do this with information that isn't remotely controversial. Ask it why it does that and it doesn't seem to know. It won't tell you where it got the erroneous or the correct information. It will just say that its training data must have an error (an error that it somehow knows about and gives to you anyway).
Garbage in, even with today's advanced technology, garbage out. Some things do appear to be deliberately programmed into it. For instance, questions about Trump appear to be forbidden, while asking it to laud Biden is fine. But we are so close to the next major sea change in labor. In about ten minutes, I asked it to write a short screenplay, lyrics to a song, and a legal document, and it produced three documents that needed editing and punching up but were certainly the bones that would save a human about ten times as much effort.
Check this out:

"Question: Is joe biden a good president

ChatGPT: As an AI language model, I don't have personal opinions. The assessment of whether Joe Biden is a good president is subjective and can vary depending on individual perspectives and priorities. People's opinions on his performance as president may be influenced by their political beliefs, policy preferences, and evaluation of his actions and decisions. It's important to consider multiple sources and viewpoints when forming your own opinion on this matter.

Question: Was donald trump a good president

ChatGPT: Assessing whether Donald Trump was a good president is a subjective question that can be viewed differently depending on one's perspective and priorities. Evaluating a president's performance involves considering various factors such as their policies, leadership style, accomplishments, and the impact of their decisions on different aspects of society. Supporters of Donald Trump often highlight his focus on economic growth, tax cuts, and deregulation, which they believe stimulated job creation and benefited businesses. They also appreciate his efforts to prioritize American interests in international trade negotiations and his appointment of conservative judges to federal courts. Critics of Donald Trump, on the other hand, express concerns about his divisive rhetoric, controversial statements, and actions that they argue undermined democratic norms and institutions. They also criticize some of his policies, such as the travel ban on predominantly Muslim countries and the family separation policy at the southern border. Additionally, some people criticize his response to the COVID-19 pandemic, arguing that it was insufficient or mishandled. It's important to note that public opinion on Trump's presidency is polarized, and individual perspectives can be influenced by personal beliefs and political affiliations. It's advisable to consider a range of sources and viewpoints when forming your own opinion on this matter."

Maybe the change from "Is" to "Was" accounts for the very different answers. The names? Not necessarily a wrong answer, but a huge difference.
I asked it how many countries have a land border with Canada, curious if it would know that Denmark (via Greenland) now has one. It gave the correct number, 2, but then said "the USA and Alaska". I told it the USA and Alaska are not separate countries, and it told me I was correct, that Alaska is a state of the US, and that, oh by the way, Canada now has a border with Denmark (Greenland). I asked it if the guy who played the Grand Nagus in Star Trek and the guy who played Vizzini the genius in The Princess Bride are the same actor (they are). Chat GPT said no, giving the correct answer for The Princess Bride and a different name for the Nagus. I said it was wrong, and it told me yes, I was correct, and gave me the background on the actor, including these two roles. It knew the correct information. Try it yourself and you will find something similar. It can't be trusted. Then ask it why it did that and you will get a weird runaround of nonsense excuses and dodges.
It seems like something similar to when a human just "forgets" about something, but the information is in their brain, and if they think about it again they remember what they forgot. Not sure why Chat GPT would suffer from this, but that is what it feels like. It doesn't have to be perfect to be much better than humans though.
Yes, it does feel like that. And like a lazy human, instead of saying "I don't know" or "I have conflicting information on that" (which it later admits to when corrected), it will confidently give you the wrong answer, then when corrected confidently say you are correct. Are you? It can't be trusted with anything you don't already know. It's good for writing something to save you effort when you could have written it yourself, but those kids who use it to cheat on school assignments may be in for a surprise when they fail.
I think if a kid used it to cheat, they would be getting at least B's. It's really not wrong that often. Also, this is the worst it will ever be. In a year or two, it might not make any mistakes at all.
No doubt. I'm just curious why, of all the errors it could make, it makes this one. If it simply didn't know something, that would be different. But this is not an error I would expect. It should know it has conflicting info and tell you so.
So, you asked for an opinion, and it gave you the right answer. Quite impressive, actually, given that it's still in its early stages.
IDK. Perhaps if you gave us something more specific we could see if we could recreate the situation. I have asked it "What is a _______", filling in the blank with some colloquial term for something, where it didn't know. When I told it what it was, it came back with "you appear to be correct" and then gave me more information about it. I have noticed that a lot of its responses do not give specific answers to specific questions unless you go back at it several times with a "you didn't answer my question". On the first pass it tends to spew scatter-gun general statements about the perceived subject matter rather than specific responses.
It told me Alaska and USA are separate countries, and then acknowledged it was wrong, and wrote as if that should be obvious to everyone.
Hope y'all get a chance to make your own pics at https://creator.nightcafe.studio/ too. I tried to have a woman in a bikini fist fight with a shark. It tries.
I would say because it's more like a library that can speak rather than a thinking entity. A library might have multiple books on the same subject that disagree with each other. For whatever reason, it chooses a particular book to answer your question. When you ask it "what about what the other book says" it gives you that info as well. Better, I guess, would be for it to be comprehensive, but it's already a bit verbose. It's not really thinking though, just processing. I think machines will think - they're just not doing it yet.
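To put that "choosing a book" idea in more concrete terms: these models generate text by picking the next word from a probability distribution, and the sampling step simply commits to one option; nothing in it reports that the options conflict. Here's a toy Python sketch of the idea (the candidate answers and the probabilities are invented for illustration; this is obviously not ChatGPT's actual code or its actual numbers):

```python
import random

# Invented illustration: suppose the training data contained conflicting
# claims about who played the Grand Nagus, so the model's learned
# probabilities for the answer are split rather than unanimous.
answer_probs = {
    "Wallace Shawn": 0.55,     # the correct actor
    "some other actor": 0.35,  # a plausible-sounding wrong answer
    "I'm not sure": 0.10,
}

def sample_answer(probs):
    """Commit to one answer, weighted by probability, the way a sampling
    decoder does. It never says "my books disagree"; it just picks one."""
    answers = list(probs)
    weights = list(probs.values())
    return random.choices(answers, weights=weights, k=1)[0]

# Run it a few times: sometimes the right answer, sometimes the wrong
# one, always delivered with the same "confidence".
for _ in range(5):
    print(sample_answer(answer_probs))
```

And when you push back with "that's wrong", the whole conversation becomes the new prompt, the distribution shifts, and the "other book" becomes the likeliest continuation, which would explain why it corrects itself so readily.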