Why do I receive zeros from my socket?

I have two programs: one written in C++, the other in Java. The C++ program keeps sending bytes to the Java program, writing out 400 KB each time. The Java program keeps receiving the data with readBytes(), a function that returns an integer: the number of bytes actually read by that call. I print something for every 400 KB send in C++ and for every read in Java, so I can see that the C++ side is about two times faster than the Java side, i.e. C++ is sending bytes roughly twice as fast as Java is reading them. After about 10 seconds, on the Java side I read out zeros from the byte array, yet the return value of readBytes() is still a positive number. Why does it behave like this? Shouldn't Java return -1 if there is no packet or a packet is lost?
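Here is a minimal Java sketch of the read loop I am describing, assuming my readBytes() is just a thin wrapper around InputStream.read(byte[]) and that the Java side accepts the connection; the class name Receiver and port 9000 are made up for this example:

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class Receiver {
    public static void main(String[] args) throws Exception {
        // port 9000 is a placeholder for this sketch
        try (ServerSocket server = new ServerSocket(9000);
             Socket socket = server.accept();
             InputStream in = socket.getInputStream()) {

            byte[] buffer = new byte[400 * 1024];
            int n;
            // read(byte[]) returns how many bytes were actually read by this call,
            // which can be far less than the buffer size. It returns -1 only when
            // the peer closes the connection, not when data is merely slow to arrive.
            while ((n = in.read(buffer)) != -1) {
                System.out.println("read " + n + " bytes");
                // only buffer[0..n-1] holds fresh data from this call; anything past
                // index n-1 is leftover from earlier reads (or still zero if never written)
            }
        }
    }
}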

To fix this problem, for every 400 KB the Java side reads, it writes the string "end" back to the C++ side, and the C++ side has to receive that "end" string before it sends the next 400 KB. So C++ and Java are doing a handshake: they align the send speed with the read speed, preventing C++ from sending faster than Java can read. With this in place, I now get all the bytes correctly. So in HTTP, the server side and the client side *MUST* do a handshake as well. (Correct me if I am wrong.)
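Below is a rough sketch of how the Java side of this handshake could look, assuming DataInputStream.readFully() is used to wait for a full 400 KB chunk before replying; the class name AckReceiver and port 9000 are placeholders, and the C++ sender is not shown:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class AckReceiver {
    private static final int CHUNK = 400 * 1024; // 400 KB per round

    public static void main(String[] args) throws Exception {
        // port 9000 is a placeholder for this sketch
        try (ServerSocket server = new ServerSocket(9000);
             Socket socket = server.accept();
             DataInputStream in = new DataInputStream(socket.getInputStream());
             OutputStream out = socket.getOutputStream()) {

            byte[] buffer = new byte[CHUNK];
            try {
                while (true) {
                    in.readFully(buffer); // block until the whole 400 KB chunk has arrived
                    // acknowledge the chunk so the sender moves on to the next one
                    out.write("end".getBytes(StandardCharsets.US_ASCII));
                    out.flush();
                }
            } catch (EOFException done) {
                // the sender closed the connection: no more chunks to read
            }
        }
    }
}

readFully() blocks until the entire buffer has been filled, so the "end" reply really does mean the whole 400 KB chunk was received, not just part of it.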

If wget is doing this kind of handshake with the Apache server, then even on a very slow network it is very hard to get packet loss, unless the bandwidth still can't carry the whole byte array in ONE single send() call. But in a real case we usually split the data into very small pieces, let's say 4 KB, so it is very, very hard to see packet loss. This explains why I can't see packet loss / ECN, even when I use wget to use up the whole bandwidth.
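For reference, this is a small Java sketch of what I mean by splitting the payload into 4 KB pieces; in my real setup the sender is the C++ program, so the class name ChunkedSender, the host and the port here are only placeholders:

import java.io.OutputStream;
import java.net.Socket;

public class ChunkedSender {
    public static void main(String[] args) throws Exception {
        byte[] data = new byte[400 * 1024]; // the 400 KB payload (contents don't matter here)
        int chunk = 4 * 1024;               // send it in 4 KB pieces

        // "localhost" and port 9000 are placeholders for this sketch
        try (Socket socket = new Socket("localhost", 9000);
             OutputStream out = socket.getOutputStream()) {
            for (int off = 0; off < data.length; off += chunk) {
                int len = Math.min(chunk, data.length - off);
                // note that TCP may still coalesce or re-split these writes on the wire
                out.write(data, off, len);
            }
            out.flush();
        }
    }
}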
