If I understand it correctly, it does not actually work like this in any reasonable runtime environment / operating system. As an example, consider calling java.io.InputStream.read() on a socket in Java (or any other language on the JVM) on Linux. If there is no data available in the buffer, the call blocks until a packet is received by the network interface, which may take several seconds. During that time the JVM thread is blocked, but the JVM and/or the Linux kernel will reschedule the CPU core to another thread or process and wait for the network interface to raise an interrupt when a packet arrives. You waste some time on thread scheduling overhead and user/kernel mode switching, but that is typically far less than the I/O waiting time.
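This is easy to observe directly. The sketch below (my own illustration, not from any particular framework) has a writer thread that waits ~200 ms before sending one byte over a loopback socket; the blocking read() parks the calling thread for roughly that long instead of burning CPU, and returns as soon as the byte arrives:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingReadDemo {

    // Returns the single byte read from the socket after the
    // artificial delay; read() blocks the calling thread until then.
    static int demo() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            Thread writer = new Thread(() -> {
                try (Socket s = server.accept();
                     OutputStream out = s.getOutputStream()) {
                    Thread.sleep(200); // simulated network latency
                    out.write(42);     // one byte finally arrives
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            writer.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                InputStream in = client.getInputStream();
                long start = System.nanoTime();
                int b = in.read(); // blocks here: thread is parked, not spinning
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("read " + b + " after ~" + elapsedMs + " ms");
                writer.join();
                return b;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        demo();
    }
}
```

While the reader is parked inside read(), the OS is free to run other threads on that core; the wakeup happens via the kernel's normal interrupt-driven socket machinery, not polling.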
I am quite sure it will work the same way on other runtime environments (such as .Net) and operating systems (such as Microsoft Windows, Solaris, Mac OS X).
There are other reasons why blocking I/O can be problematic, and the Reactive principles are useful in many cases. But please be honest and don't portray blocking I/O as worse than it actually is.