

Not that I agree they’re conscious, but this is an incorrect and overly simplistic definition of an LLM. They are probabilistic in nature, yeah, and they work on tokens, i.e. fragments of words. But saying modern GPTs are just Markov chains that produce plausible continuations of [the context] is about as much of an oversimplification as saying humans are.
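For what it’s worth, here’s a toy sketch of the naive “Markov chain over tokens” picture being pushed back on (a hypothetical bigram model over a made-up corpus, not how GPTs actually work): it picks the next token by sampling from counts seen after only the previous token, whereas a GPT conditions on the entire context.

```python
import random
from collections import defaultdict, Counter

# Toy "Markov chain over tokens": the next token depends ONLY on the
# previous token, sampled in proportion to observed bigram counts.
# GPTs instead condition on the whole context window, which is the
# point of the oversimplification complaint above.

def build_bigram_model(tokens):
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def sample_next(model, prev, rng):
    candidates = list(model[prev])
    weights = [model[prev][t] for t in candidates]
    return rng.choices(candidates, weights=weights)[0]

# Hypothetical tiny corpus for illustration only
corpus = "the dog ate the taco and the dog slept".split()
model = build_bigram_model(corpus)
rng = random.Random(0)
print(sample_next(model, "the", rng))  # samples "dog" or "taco"
```

After “the”, this model can only ever emit “dog” or “taco”, no matter what came earlier in the sentence; that memorylessness is exactly what separates a Markov chain from a transformer.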
Even better, gordita literally translates to “little fat girl,” I believe (but maybe I’m wrong). Tbf, we have a processed sausage-alike that we call a hot dog, so language and our perception of it is weird.