Doug, the answer goes back in history. The earliest computers had no solid-state memory. Memory was built by hand in a very complex process of threading very fine wires through small magnetic loops called toroids. Here is a link to Wikipedia with a picture of a 1 Kbit module:
https://en.wikipedia.org/wiki/Magnetic-core_memory
One of the features of core memory, as it was called, was that reading an individual toroid zeroed it out, so a hardware process had to re-write the bit to put the core back into the state it was in before the read. The toroids also remembered their last state across power-off, so at boot the entire core needed to be zeroed out before it could be used. In the early days there was no electronic long-term storage for data, so information was punched into cards. You started the program in the computer and then fed the data cards into the job for the data to be processed, and the answers were punched onto cards at the end of the process, or printed on a printer. When the computer was shut down, the core was stuck in whatever state it was in when the last job ended, needing a reset at boot. In practice, though, you NEVER shut the computer down; you just kept feeding it a new job as soon as it finished the current one, and one of the functions of loading that job was to zero out core memory to prepare for the new data to be written to it.
Eventually, magnetic media were developed for long-term storage: first tape, then drums, then disks. Tape data had to be accessed sequentially as the tape passed through the machine, but drums and disks gave the ability to find data anywhere at any time. Quite an accomplishment. The data in core memory could be accessed anywhere in the core at any time, and core was therefore called Random Access Memory, or RAM. Data density was very low compared to today. I managed a data center that had state-of-the-art (at the time) storage units that could hold about 350,000 bytes of information. We thought that was awesome! And the biggest computer in the data center had a massive 2 megabytes of internal core memory, plus an additional external memory unit with another 2 megabytes of core in a box about the size of a desk (all water-cooled, of course).
Even back then, core memory was much faster than the other storage media, so the central processor could operate at high speed. Storage was slower, even on drums and disks, but still faster than tapes and punched cards.
OK, back in the time machine and back to today.
RAM is very, very fast non-persistent memory that is optimized for speed. The buses into and out of RAM are among the fastest parts of the entire computer. The upside is that RAM is fast; the downsides are that that speed costs significantly more to manufacture and that the memory is volatile: remove power and it all goes away, poof! In fact, part of the internal operation of dynamic RAM chips is rewriting what is stored before it can fade. An entire portion of the chip is dedicated to refreshing the bits over and over and over, many thousands of times every second. So: high speed, expensive, and ephemeral.
The memory chips in an SSD are not as fast, but they are persistent. You can store stuff on them, completely disconnect all power, and the data will still be there when you power back up and read it (up to a limit: entropy will eventually cause the data to fade, but for our purposes it's permanent until erased). The downside is that it's much slower than RAM, so the bus speeds for reading and writing it are much lower. Cost is lower too, so the manufacturers can put a LOT of it in your computer for not so much money. Bottom line? Slow, cheap, and persistent.
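You can actually feel that speed gap yourself. Here's a quick sketch in Python (my own illustration, nothing official) that times copying a chunk of data purely in RAM versus writing the same chunk to a file and forcing it all the way down to the storage device:

```python
import os
import tempfile
import time

SIZE = 16 * 1024 * 1024  # 16 MiB of data
data = bytes(SIZE)

# Time a pure in-RAM copy of the data.
start = time.perf_counter()
ram_copy = bytearray(data)
ram_time = time.perf_counter() - start

# Time writing the same data to persistent storage,
# using fsync so it actually reaches the device, not just a cache.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    start = time.perf_counter()
    f.write(data)
    f.flush()
    os.fsync(f.fileno())
    ssd_time = time.perf_counter() - start
os.remove(path)

print(f"RAM copy:  {ram_time * 1000:.2f} ms")
print(f"SSD write: {ssd_time * 1000:.2f} ms")
```

On a typical machine the in-RAM copy finishes in a fraction of the time the SSD write takes, and on an old spinning disk the gap is wider still. Exact numbers will vary with your hardware, which is rather the point.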
To your question, then: COULD it be done? Probably, although for the engineering reasons about booting already given, you would still have to have some RAM. But could an SSD replace most of the current RAM? Theoretically, yes, but you would find the machine much slower because of the slow nature of the memory in the SSD. And given that what users want is speed, speed, and more speed, any manufacturer who tried to build a no/low-RAM system by using an SSD for RAM would find very few buyers.
So, that's it. It's based on engineering, and it is based, partly, on cost. One could theoretically build a machine that was all RAM for speed, with no SSD for storage at all, but then you would need a super-redundant power supply, because everything would be in volatile memory and you could lose it all if the power died. You'd have a super expensive, super sensitive, super fragile computer that demands power so reliable that it is never absent, even for a hundredth of a second.
Or you can build an all SSD slow machine.
Or, as they have done, you find a compromise of RAM and SSD that meets users' needs. That is what the manufacturers do--try to find a "sweet spot" where cost and performance are both attractive. The goal is to avoid using the slower SSD as memory at all, which is why we tell folks who see a slowdown to check whether they are using swap space in Activity Monitor. Swap is stuff that cannot fit into RAM, so it has to be written out to the SSD until it's needed, and doing that can really slow down the entire computer.
Hope that helps some.