Comments


moonlune OP wrote

Large language models such as GPT-3 and GPT-4 can be thought of as having good intuition. Every idea they know is encoded as statistical weights linked to other weights, and answering a question is more or less a curve-fitting exercise, or pushing a vector through a matrix. It's a one-step reflection.
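That "one-step" view is easy to picture in code: a single forward pass, vector in, token out, with no planning or backtracking. A toy sketch with made-up dimensions and random weights, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, vocab = 8, 16                          # tiny, invented dimensions
W_hidden = rng.normal(size=(d_model, d_model))  # stand-in "intuition" weights
W_out = rng.normal(size=(d_model, vocab))       # projection to vocabulary

def one_step_answer(x: np.ndarray) -> int:
    """One forward pass, no reflection: push the vector through the matrices."""
    h = np.tanh(x @ W_hidden)      # the curve fitting is baked into the weights
    logits = h @ W_out
    return int(np.argmax(logits))  # most "intuitive" next token, nothing more

x = rng.normal(size=d_model)       # stand-in for an embedded prompt
print(one_step_answer(x))
```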

This, combined with the "self-reflection" concept (the paper came out last week; it reportedly improves the output of LLMs by around 20%), brings planning, memory, and multi-step reflection to the godlike intuition of current LLMs.
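Roughly, the loop looks something like the sketch below. `generate` and `critique` are hypothetical stand-ins for an actual LLM call and a self-evaluation step (unit tests, an LLM judge, etc.), not the paper's code:

```python
def generate(prompt: str, reflections: list[str]) -> str:
    # Stand-in for an LLM call; a real system would append the stored
    # reflections to the prompt so the model can avoid past mistakes.
    return prompt.upper() if reflections else prompt

def critique(answer: str) -> str | None:
    # Stand-in for self-evaluation; returns feedback, or None if acceptable.
    return None if answer.isupper() else "try upper-case"

def reflect_loop(prompt: str, max_tries: int = 3) -> str:
    reflections: list[str] = []              # memory across attempts
    answer = generate(prompt, reflections)
    for _ in range(max_tries):
        feedback = critique(answer)          # evaluate the draft
        if feedback is None:                 # good enough: stop
            return answer
        reflections.append(feedback)         # remember what went wrong
        answer = generate(prompt, reflections)  # retry with that memory
    return answer

print(reflect_loop("hello"))  # -> "HELLO" after one round of reflection
```

The point is that the single forward pass stays the same; the planning and memory live in the outer loop, not in the model itself.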


roanoke9 wrote

I think it would be weird if we accidentally made an AI that could do what the human mind does, when we don't actually know what the human mind does. The only reason it's even debated is that the analogy of the human mind as a computer has become so pervasive that an AI acting like the computer-like heuristic we currently pretend describes the mind gets counted as a computer acting like an actual human mind. Even the mind/body division is a heuristic that is commonly accepted as objective reality... ever get the feeling that the structure of language itself is pitted against your attempts to transfer an idea?
