Typical loopback bug for a whois service

31 Dec

I was checking some whois data this morning and the whois service responded that the query limit had been exceeded from my IP. The IP address quoted was 128.121.xx.xx. I almost ignored it, but later realized that the IP is not among the familiar web proxies that either I or my service provider use. So I thought it might be a man-in-the-middle (MITM) attack. Curious, I fired off a traceroute to that IP and was surprised to find that it belongs to the whois service itself. Here is the (edited) log.

bash-3.00$ /usr/sbin/traceroute 128.121.xx.xx
traceroute to 128.121.xx.xx (128.121.xx.xx), 30 hops max, 40 byte packets
1 192.168.xx.xx (192.168.xx.xx) 0.753 ms 0.680 ms 0.560 ms
2 59.93.xx.xx (59.93.xx.xx) 13.854 ms 10.931 ms 11.750 ms
3 218.248.xx.xx (218.248.xx.xx) 27.348 ms 26.016 ms 26.512 ms
4 218.248.xx.xx (218.248.xx.xx) 26.609 ms 27.149 ms 28.772 ms
5 202.54.xx.xx (202.54.xx.xx) 57.882 ms 56.397 ms 58.348 ms
6 59.163.xx.xx.static.xx.xx.xx (59.163.xx.xx) 321.756 ms 323.790 ms 324.200 ms
7 if-1-2.core2.PDI-PaloAlto.xx.xx (64.86.xx.xx) 288.233 ms 288.030 ms 288.150 ms
8 if-5-0.core2.SQN-SanJose.xx.xx (64.86.xx.xx) 423.789 ms 288.990 ms 404.931 ms
9 if-5-0-0.core4.SQN-SanJose.xx.xx (66.198.xx.xx) 288.133 ms 287.520 ms 286.458 ms
10 ix-5-1-1.core4.SQN-SanJose.xx.xx (216.6.xx.xx) 314.275 ms 314.397 ms 314.642 ms
11 ae-0.r20.snjsca04.us.bb.xx.xx.xx (129.250.xx.xx) 316.360 ms 317.265 ms 315.969 ms
12 * xe-1-3.r02.mlpsca01.us.bb.xx.xx.xx (129.xx.xx.xx) 435.289 ms 315.773 ms
13 ae-0.r00.mlpsca01.us.wh.xx.xx (129.250.xx.xx) 314.953 ms 316.602 ms 316.526 ms
14 ge-0-1.a0441a.mlpsca01.us.wh.xx.xx (128.121.xx.xx) 314.410 ms 313.703 ms 313.022 ms
15 xx.xx (128.121.xx.xx) 315.390 ms 316.097 ms 315.946 ms

So the whois service treated the request as if it had originated from itself and applied the query-limit check to its own IP. A classic loopback bug!
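As a hypothetical sketch of how such a bug can arise: a rate limiter keyed on the socket peer address instead of the forwarded client address. When requests pass through the service's own front end, every query appears to come from one internal IP and the limit trips for everyone. The function names, limit, and the X-Forwarded-For convention here are my assumptions, not the actual implementation of the service.

```python
# Hypothetical sketch of the suspected bug: counting queries against the
# socket peer address rather than the original client IP. Behind the
# service's own proxy, the peer address is the service itself.
from collections import defaultdict

QUERY_LIMIT = 100  # assumed limit, for illustration only
counts = defaultdict(int)

def client_ip_buggy(peer_addr, headers):
    # Buggy version: ignores the X-Forwarded-For header set by the proxy,
    # so the proxy's (i.e. the service's own) IP is counted instead.
    return peer_addr

def client_ip_fixed(peer_addr, headers):
    # Correct version: use the proxy's forwarded header when present.
    return headers.get("X-Forwarded-For", peer_addr).split(",")[0].strip()

def allow_query(peer_addr, headers, ip_fn):
    # Increment the counter for whatever IP ip_fn resolves, and allow
    # the query only while that IP is under the limit.
    ip = ip_fn(peer_addr, headers)
    counts[ip] += 1
    return counts[ip] <= QUERY_LIMIT
```

With the buggy resolver, distinct clients arriving via the proxy all charge the same internal counter, so the service ends up rate-limiting its own address exactly as the error message suggested.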

BTW, the above log shows the route to its San Jose data center, but the first trace I ran (which I didn't save) went to its New York data center. So the web service is probably load balanced across multiple locations, and one of the servers may be running the service with this loopback bug. That would explain why the service has otherwise been behaving correctly of late.

In general, in a virtualized data center scenario, reproducing a problem using synthetic transactions (never mind if that sounds like jargon) is impractical, since you can't control which server in the pool handles a given request. Timestamp-based log analysis might yield better results.
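The timestamp approach might look something like the sketch below: instead of replaying queries and hoping to hit the faulty server, pull log lines from every server in the pool for the window around the failing request's timestamp, and see which one logged the rejection. The log format and function name are my assumptions for illustration.

```python
# Hypothetical sketch: filter log lines to a time window around a known
# failure, so logs from each server in a load-balanced pool can be
# compared without having to reproduce the failure on demand.
from datetime import datetime, timedelta

def lines_near(log_lines, failure_time, window_secs=60):
    """Return log lines whose timestamp falls within window_secs of
    failure_time. Assumes each line starts with an ISO-8601 timestamp,
    e.g. '2008-12-31T09:15:02 query denied: limit exceeded'."""
    lo = failure_time - timedelta(seconds=window_secs)
    hi = failure_time + timedelta(seconds=window_secs)
    out = []
    for line in log_lines:
        try:
            ts = datetime.fromisoformat(line.split(" ", 1)[0])
        except ValueError:
            continue  # skip lines without a parseable timestamp
        if lo <= ts <= hi:
            out.append(line)
    return out
```

Running this against each server's log for the same window quickly narrows down which machine issued the spurious "limit exceeded" response.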
