

Those “almost completely forgotten” characters were important when ASCII was invented, and a lot of that data is still around in some form or another. And since they’re there, they’re still available for the purposes they were designed for. You can be sure someone would want to re-invent them if they weren’t already there.
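To make that concrete, here’s a quick Python sketch of a few control characters that still earn their keep (exact terminal behaviour varies, so treat it as an illustration rather than a spec):

    # A few ASCII control characters still doing their original jobs.
    print("col1\tcol2")           # 0x09 HT  - tab, still separates fields (TSV, Makefiles)
    print("line1\nline2")         # 0x0A LF  - line feed, still ends lines
    print("ding\a")               # 0x07 BEL - may beep or flash the terminal
    print("\x1b[1mbold\x1b[0m")   # 0x1B ESC - starts ANSI escape sequences for colour/style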
Some operating systems assigned symbols to those characters anyway; MS-DOS was notable for this. Other standards had code pages that gave different meanings to the byte range beyond ASCII depending on the language: one language might have “é” in one place and another language somewhere else. This caused problems.
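If you want to see the problem for yourself, here’s a quick Python sketch (the byte 0xE9 is just an example value; the codec names are Python’s):

    # The same byte means different characters under different legacy code pages.
    raw = bytes([0xE9])
    print(raw.decode("latin-1"))   # 'é' - Western European
    print(raw.decode("cp1251"))    # 'й' - Cyrillic
    print(raw.decode("cp437"))     # 'Θ' - original IBM PC / MS-DOS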
Unicode is an extension of ASCII: its first 128 code points are the same, and it aims to give every necessary symbol a single, fixed place.
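Again in Python terms, roughly:

    # In Unicode every character has one fixed code point, and the first 128 match ASCII.
    print(hex(ord("A")))          # 0x41 - same value it has in ASCII
    print(hex(ord("é")))          # 0xe9 - U+00E9, the same regardless of locale
    print("é".encode("utf-8"))    # b'\xc3\xa9' - UTF-8 is just one way to serialize that code point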
The fact that languages X, Y and Z, whose alphabets are extended Latin, don’t happen to get their letters in contiguous runs of code points is a problem, but not one that much can be done about.
It’s understandable that anyone would want their own alphabet to be the base, but some alphabet has to be, or you end up in code page hell again. English happened to get there first.
If you want a fun exercise (for various interpretations of “fun”), design your own standard. Do you put the digits 0-9 at code points 0-9, or do you start with your preferred alphabet there? What about upper and lower case? Which goes first? Where do you put Chinese?
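For reference, the answers ASCII itself settled on, checked in Python (an illustration, not a recommendation):

    print(ord("0"), ord("9"))     # 48 57   - digits at 0x30-0x39, not at code points 0-9
    print(ord("A"), ord("Z"))     # 65 90   - upper case first, at 0x41-0x5A
    print(ord("a"), ord("z"))     # 97 122  - lower case after, at 0x61-0x7A
    # Chinese doesn't fit in 7 bits at all, which is part of why code pages and then Unicode happened.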
I see I’ve forgotten to put on my head net today. You know the one. Looks like a volleyball net. C shape. Attaches at the back. Catches things that go woosh.