After reading some documentation about how harmful the Nagle algorithm can sometimes be on a socket used to connect to an Oracle database, I tried to find out whether my Oracle client library disables it by default. After some research on the Internet I found pretty much nothing about how to check this; the best I could find were articles explaining what the Nagle algorithm is.
After a couple of hours of playing around I realized that there is no command I can run to inspect this flag on an established socket.
So, without access to the source code, it seemed impossible:
netstat won't show it, and tcpdump is of no help either, because it only shows you the packets on the wire, not the actual socket flags.
After researching some more, I found that this socket flag (and a bunch of others) is set with the setsockopt() system call. Since it's a system call, a call tracing utility can be used to see it happen (strace on Linux, ktrace/kdump on FreeBSD).
Sure enough, when the connection is about to be established I saw the following:
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 4
connect(4, {sin_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("[some_ip_address]")}, 16) = 0
getsockname(4, {sin_family=AF_INET, sin_port=htons(57957), sin_addr=inet_addr("[some_ip_address]")}, [16]) = 0
setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
fcntl64(4, F_SETFD, FD_CLOEXEC) = 0
The third parameter passed to
setsockopt() is TCP_NODELAY, which is exactly what I was trying to find out. I also verified this with other Oracle client library versions: version 8.1.7 sets it, and so does Oracle Instant Client 10.2.