Not entirely true; the big change was multi-head attention and the transformer model.
It’s not just being used for language but for anything where sequence and context patterns really matter. Some tasks still use convolutional networks, RNNs, etc., but transformers aren’t just for LLMs. There are definitely a lot of algorithmic advances driving the wave of new AI implementations, not just hardware improvements.
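For anyone who hasn’t seen it, here’s a minimal sketch of the multi-head attention piece using PyTorch’s built-in module (the dimensions and sequence length are made up for illustration). The idea is that each head can track a different context pattern over the same sequence, which is the part that replaced recurrence:

```python
# Minimal multi-head self-attention sketch using PyTorch's built-in module.
# embed_dim, num_heads, and the input shapes below are illustrative, not from any real model.
import torch
import torch.nn as nn

embed_dim, num_heads = 64, 8  # embedding size must be divisible by the head count
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(2, 10, embed_dim)  # (batch, sequence length, embedding)

# Self-attention: query, key, and value are all the same sequence, so every
# position can attend to every other position in one shot.
out, weights = attn(x, x, x)
print(out.shape)      # torch.Size([2, 10, 64])
print(weights.shape)  # torch.Size([2, 10, 10]) -- attention weights averaged over heads
```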
Agreed. It’s a lot of the same tech that powers both, but it’s not like a self-driving car contains a language model that’s fine-tuned on the adventures of Steve McQueen or something.