@SW-User Regardless of the version, there is something called a max token limit. That's how much of the historical conversation it can "keep in memory" at once. So even with the new model it will do this eventually, it just takes longer. When the conversation runs past the token limit, it simply forgets the oldest stuff first.
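If it helps to picture it, the trimming behaves roughly like the sketch below. This is just an illustration, not the actual implementation: the token counting here is faked with a word count (real models use a proper tokenizer), and the 50-token limit is a made-up number.

```python
MAX_TOKENS = 50  # made-up limit for illustration; real models allow far more

def count_tokens(message):
    # stand-in for a real tokenizer: approximate tokens as words
    return len(message.split())

def trim_history(history, limit=MAX_TOKENS):
    # drop the oldest messages until the total fits under the limit
    total = sum(count_tokens(m) for m in history)
    trimmed = list(history)
    while trimmed and total > limit:
        total -= count_tokens(trimmed.pop(0))  # forget the oldest message
    return trimmed
```

So with six 10-word messages and a 50-token limit, the first message gets dropped and the newest five survive, which is why old details vanish while recent ones stay sharp.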
Getting it to summarize things you've talked about every now and then can help keep a rough outline in its memory, though it will still lose details. It does help some with cohesion.