I was reading a report on how robo-advisors now have over $1 trillion under management in the U.S.A. This means clients are self-directing their investments under the guidance of automated prompts, much like an IVR phone menu. People do this because they are embarrassed to ask an advisor for guidance.
The same report mentioned that people are now starting to ask ChatGPT for investing advice.
I use ChatGPT and other LLMs (AI tools) to write program code. The AI can type faster than I can, and without spelling mistakes. However, it often creates code that depends on things that don't exist. When asked for references, it will invent URLs (web links) that point to non-existent websites and pages. Even when specifically told NOT to fabricate references, it still will. I have to review, and often correct, what it generates. I spend about half my time planning and designing, and the other half fixing what the AI created.
Can you imagine the impact this could have on a person's investments? Ask the AI to create an investment plan, and it will produce a document. Will it contain stocks and companies that actually exist? Maybe. Will it tell you why it chose the ones it did? Sure. Can you trust that it didn't conjure the supporting documents, and therefore the reasons for choosing one company over another, out of thin air?
But even worse: when you call an AI out on something and tell it why you think it is wrong, it will tell you that you are absolutely right, whether your argument has merit or not. It will be your cheerleader as you head down a disastrous path, because it wants your engagement, and we typically don't engage with people who tell us things we don't want to hear!
Do you really want the ultimate yes-man advising you on your investments?
