In their quest to promote Reactive, Typesafe is beating up a straw man by portraying blocking I/O in a particularly stupid way that is rarely (if ever) how it is done in practice.
In a recent webinar, I came across a slide which suggests that a blocking I/O operation will waste CPU time while waiting for the I/O to complete.
If I understand it correctly, it does not actually work like this in any reasonable runtime environment / operating system. As an example, consider calling java.io.InputStream.read() on a socket in Java (or any other language on the JVM) on Linux. If there is no data available in the buffer, this call will block until some packet is received by the network interface, which may take several seconds. During that time, the JVM thread is blocked, but the JVM and/or the Linux kernel will reschedule the CPU core to another thread or process and wait for the network interface to issue an interrupt when a packet arrives. You will waste some time on thread scheduling overhead and user/kernel mode switching, but that is typically far less than the I/O waiting time.
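The behavior described above can be demonstrated with a small sketch (the class name and helper method are mine, for illustration): one thread blocks in InputStream.read() on a loopback socket while the sender deliberately waits before writing. While parked in read(), the reader thread consumes no CPU; the kernel wakes it as soon as the byte arrives.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingReadDemo {
    // Starts a reader thread that blocks in InputStream.read(), then sends
    // one byte after a delay. Returns the byte the reader received.
    static int readOneByte() throws Exception {
        final int[] received = { -1 };
        try (ServerSocket server = new ServerSocket(0)) {
            Thread reader = new Thread(() -> {
                try (Socket s = server.accept();
                     InputStream in = s.getInputStream()) {
                    // Blocks here: the kernel parks the thread until data
                    // arrives, so no CPU is burned while waiting.
                    received[0] = in.read();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            reader.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 OutputStream out = client.getOutputStream()) {
                Thread.sleep(200); // reader is parked in read() all this time
                out.write(42);
                out.flush();
                reader.join();     // wakes as soon as the byte arrives
            }
        }
        return received[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println("received: " + readOneByte());
    }
}
```

Watching this process in top during the sleep would show essentially zero CPU usage, which is the point: the thread is waiting, not spinning.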
I am quite sure it works the same way on other runtime environments (such as .NET) and operating systems (such as Microsoft Windows, Solaris, or Mac OS X).
There are other reasons why blocking I/O can be problematic, and the Reactive principles are useful in many cases. But please be honest and don't portray blocking I/O as worse than it actually is.
His explanation is a bit strange, alright. I'm not sure he is actually trying to say that CPU time will be wasted in the sense that the blocking I/O operation itself uses the CPU. I think his message is that a thread blocked by an I/O call isn't an efficient use of a limited resource (the pool of threads serving the actors).
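The commenter's reading can be sketched as follows (the class name, method, and the latch used to simulate a blocking call are mine, for illustration): a parked thread costs no CPU, but it is still unavailable to the pool, so other tasks queue up behind it. With a single-thread pool, a quick CPU-only task cannot run until the "blocked" task releases the thread.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class PoolStarvationDemo {
    // Returns the quick task's result (2) only if it was starved while the
    // pool's single thread sat in the simulated blocking call; -1 otherwise.
    static int quickTaskResultAfterStarvation() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CountDownLatch release = new CountDownLatch(1);
        // Simulates a blocking I/O call: parks the pool's only thread.
        pool.submit(() -> { release.await(); return null; });
        // A quick CPU-only task queued behind it cannot start yet.
        Future<Integer> quick = pool.submit(() -> 1 + 1);
        boolean starved = false;
        try {
            quick.get(200, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            starved = true; // the pool thread was tied up, as expected
        }
        release.countDown();          // "I/O" completes, thread is freed
        int result = quick.get();     // now the quick task runs
        pool.shutdown();
        return starved ? result : -1;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(quickTaskResultAfterStarvation());
    }
}
```

So both sides can be right at once: the blocked thread wastes no CPU, yet in a bounded pool (as in an actor runtime) it still ties up a scarce scheduling slot.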