Mar 25, 2008
 

I’m setting up a shared database server in a data center and I don’t have any local connections between the machines (i.e. links that only connect the boxes to each other and don’t let external traffic in), no firewalls, no private routes or any other networking goodies. These machines each have a single ethernet card that accepts connections from anywhere. So, my concern is that my new database server needs to allow the other servers in the cluster access to MySQL without opening it up to everyone in the world and inviting hackers in. Instead, I want to lock things down so that only certain machines can connect to MySQL and everyone else is rejected.

In order to pull this off, I’m making use of iptables, which allows me to control how IP packets are handled by the kernel. There is loads of material out there on iptables, so I won’t go into exactly how it works. Instead, I’ll just show you all how I did it. All these commands are run as root (via sudo or as root directly).
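The rules boil down to something like this (the IP addresses below are placeholders, not the real cluster machines):

    # accept MySQL (3306) connections from each machine in the cluster
    iptables -A INPUT -p tcp -s 10.0.0.2 --dport 3306 -j ACCEPT
    iptables -A INPUT -p tcp -s 10.0.0.3 --dport 3306 -j ACCEPT
    # drop all other MySQL traffic on the floor
    iptables -A INPUT -p tcp --dport 3306 -j DROP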

This allows access on port 3306 (the MySQL default) to only two IP addresses and drops all other traffic to that port on the floor. I can allow as many IPs as I want by repeating the second command with a different IP.

Aug 29, 2007
 

I received a reply to my Google post today. Since I get a lot of spam, I filter everything on this blog. Usually, when a comment comes in, I just make sure it isn’t porn or spam and approve it. This one was extremely interesting, but not very compelling, mostly because it was posted anonymously. This could mean it is a Google employee who didn’t want to be found or just some random person trolling (I do have his IP address however – hehe). Anyways, the comment is a personal attack, so at first I took offense and got defensive. But the more I thought about it, the more I began to think about interviewing again. Here’s the comment:

Wow, this entire article is littered with hubris and cocky remarks, but the summary takes the cake. Why didn’t Google hire you? They were trying to best you in a battle of wits and you were too smart! Of course that must be it since you’re an algorithms God!

Seriously Bro, you need to check yourself. Your ego, not Google’s, probably cost you the position.

First things first… I want to clear the air about my post and this comment. The majority of my post has no cocky remarks or hubris. It’s just what happened, pretty simple. I agree that my post got somewhat more subjective near the end, but I was forced to draw conclusions because of the treatment I received. Just to give you perspective, the Google recruiter left me a voice mail telling me I was not selected because they “thought my coding skills were not very good”. You read that correctly, A VOICE MAIL. This was very unprofessional. My phone and Boulder interviews were great and very productive. However, my treatment in California and after was very poor.

As for being a God of algorithms, I clearly state I’m not. I had trouble with some of their questions and that’s the point. They want to challenge you, which I completely agree with.

As for egos and battles of wits, I disagree with the commenter, and this is what I’ve been thinking about since I read this comment. Ego is an interesting beast. In an interview there are a few possibilities, two of which are:

1. The interviewee has an ego and thinks they are better than the interviewer or the company. In this case they usually just answer the questions, and if they get stumped it is my experience that they start arguing. In most cases the arguments are defensive and without any pragmatic basis.

2. The interviewer has an ego and thinks they are better than the interviewee. In this case the interviewer isn’t out to find good candidates. They are out to find the candidates that will make them feel smart. If you answer their questions well, try to create good dialogs or introduce any pragmatism, you’ll probably get the toss.

So, what happened with me? Well, I have no doubt that the dictionary question threw me off a bit. However, I was definitely excited about being able to interview at Google. Even though I didn’t know the answer, I was interested in trying to figure it out with a good dialog. My first reaction was that it might be a premature optimization, because the solution could get complex. In truth the optimization isn’t complex, but I didn’t know the answer at first. So, I wanted to begin a dialog about it and try to work through it. This didn’t go over well. In the end the interviewer got annoyed and just gave me the answer after I asked for it (he was about to move on without finishing that question when I stopped him).

Next was the hiring error question. This question had very little information around it and I again was interested in solving it. I did what I normally do and asked some questions, trying to start a dialog about it. This again didn’t go over well. The interviewer actually tried to solve it himself. I’m not making that up, it actually happened. But he couldn’t. This really made me think that this interviewer had a lot of ego and access to a list of questions to pick from. He picked a few, but hadn’t actually solved them all. After the interview I talked a bit with my local mathematics whiz (two master’s degrees and almost a PhD, in case people think he’s just some quack I work with) just to see if the interviewer and I were missing something. He confirmed that the question didn’t make sense given the information, and that the solution I posed was correct absent more information and bounds. You have to reduce the error to fix things.

Lastly, my comments about the tag-along were completely subjective and editorial. I personally thought it was poor form to bring along someone who wasn’t going to participate in the interview. It was uncomfortable and made it difficult to concentrate. Was this ego? Maybe. But probably just plain nerves.

Now, I think the commenter was specifically thrown off by my summary, and once he had finished that, he forgot about the other stuff I wrote. The summary was completely editorial and just a plain old guess as to the end result. The reason I suggested that I was “grilled” because of my resume is that I’ve had a number of friends interview at Google. Many of them never got the questions I did, and my phone interviewer even told me that his questions were the hardest that Google gave. So why, when I did so well in the three interviews prior, would everything fall apart in the end? How was it that when I called my friends at Google they were astonished that I wasn’t hired?

In all honesty, I don’t know. So I have to hypothesize as to what happened in California. During both interviews I had in California, I had a feeling that I wasn’t really a candidate. I wanted to get a sense of what life was like at the company. I asked the first interviewer if people went out for drinks or if there were company activities and his answer was, “I’ve got kids and I don’t do that”. Another interviewer took off partway through the interview. So, given my experience, I drew some conclusions. Were they accurate? Who knows, but I did warn readers that I had nothing to back them up. As for my points:

Have I worked on huge systems? You bet.

Are the systems I’ve built larger than what the majority of other engineers have worked on? Yes. 3000-5000 servers, distributed, etc.

Do I think that I write solid code? Absolutely.

Do I think I could work at Google on huge systems? Yes.

For these reasons, I wrote my summary and made a guess as to the result. The conclusion I came to was based on experience. Having been part of the interviewing process at a number of companies, I have seen a few interviewers clobber lesser candidates but hire them anyway, while passing over good candidates. Therefore, I believe it is fundamentally important to ensure your interviewers are doing a good job and working in the best interest of the company. If they are allowing ego to interfere with making good selections, they shouldn’t be interviewing. Lastly, my summary only applies to my experience in California. My phone interview and Boulder interview were great. Not a single sign of ego during either.

Aug 27, 2007
 

The phone screen

Question #1: How would you go about making a copy from a node in a directed graph?

This I had done before for Savant’s dependency tree. I had never distributed this type of operation, but since the operation is simple on a single machine, distributing it seems to be just a matter of dividing the work and then sharing information via standard distributed operations like scatter, collect, etc.
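For what it’s worth, the single-machine version is just a depth-first walk with a map from original nodes to their copies. A rough Java sketch (Node is a made-up class for illustration, not Savant’s actual dependency type):

    import java.util.*;

    // Copy (clone) a directed graph starting from a node. The map doubles as a
    // visited set so cycles and shared sub-graphs are only copied once.
    class Node {
        final String id;
        final List<Node> edges = new ArrayList<>();
        Node(String id) { this.id = id; }
    }

    class GraphCopier {
        static Node copy(Node original) {
            return copy(original, new HashMap<Node, Node>());
        }

        private static Node copy(Node original, Map<Node, Node> copies) {
            Node existing = copies.get(original);
            if (existing != null) {
                return existing;                  // already copied (cycle or shared node)
            }
            Node duplicate = new Node(original.id);
            copies.put(original, duplicate);      // register before recursing
            for (Node neighbor : original.edges) {
                duplicate.edges.add(copy(neighbor, copies));
            }
            return duplicate;
        }
    }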

Question #2: How would you write a cache that uses an LRU?

This I had also worked on. At Orbitz the hotel rates were cached in a very large cache that used an LRU eviction policy. The LRU implementation was actually from Apache, but the principle was simple. The cache was just a large array that we used double hashing on (for better slot usage). Collisions were immediately replaced, although a bucket system could also be used to avoid throwing away recent cache stores. The LRU itself was just a linked list of pointers to the elements in the hash. When an element was hit, its node in the linked list was removed and appended to the head. This was faster if the hash stored a struct that contained a pointer to the node in the linked list.
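A minimal Java sketch of the same idea, letting LinkedHashMap do the hash-plus-linked-list bookkeeping (nothing like the actual Apache implementation we used, just the principle):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal LRU cache: a hash map whose entries are also chained in access
    // order, so the least recently used entry is always the eldest.
    class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        LruCache(int capacity) {
            super(capacity, 0.75f, true);   // accessOrder = true: get() moves an entry to the tail
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;       // evict the least recently used entry when full
        }
    }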

Question #3: How would you find the median of an array that can’t fit in memory?

This question I had some problems with. I wasn’t fond of theory and numerical computation at school, so I did what was needed and retained very little. Not to say that I couldn’t learn the material; it just didn’t interest me all that much, and to date I’ve never used any of it. Of course, if I was going to work on the search engine code at Google, I would need to brush up. Anyways, I started thinking about slicing the array into segments and then distributing those. Each agent in the grid could further slice and distribute to build a tree. Then each agent would find its median and push that value to its parent. That is as far as I got, because I didn’t have the math to know if there was an algorithm that could use those medians to find the global median.

Well, after the call I busted out the CLR and found that all this stuff falls under “selection” algorithms. There is one algorithm that does exactly what I described, but it then takes the “median of medians” and partitions the entire list around that value. This could probably be done on the grid, and there was a PDF paper I stumbled across that talked about doing things this way. I’m not sure that is the best call though. After thinking about it more, I wondered if the distributed grid could be a b-tree of sorts that uses the data set’s median value (e.g. for real numbers, always a known quantity if the data set is known) to build the tree. Once the tree was built you would just recursively ask each node for its count and then ask for the ith element, where i = count / 2. Again, I couldn’t really find anything conclusive online to state this was a possible solution.
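To make the “selection” idea concrete, here is a rough single-machine Java sketch: plain randomized quickselect, not the median-of-medians or distributed variant. The median is just select(a, a.length / 2).

    import java.util.Random;

    // Randomized selection (quickselect): find the ith smallest element by
    // partitioning, the same family of algorithms "median of medians" belongs to.
    class Selection {
        private static final Random RANDOM = new Random();

        static int select(int[] a, int i) {
            int low = 0, high = a.length - 1;
            while (low < high) {
                int pivotIndex = partition(a, low, high);
                if (i == pivotIndex) {
                    return a[i];
                } else if (i < pivotIndex) {
                    high = pivotIndex - 1;
                } else {
                    low = pivotIndex + 1;
                }
            }
            return a[low];
        }

        // Lomuto partition around a randomly chosen pivot; returns the pivot's final index.
        private static int partition(int[] a, int low, int high) {
            swap(a, low + RANDOM.nextInt(high - low + 1), high);
            int pivot = a[high];
            int store = low;
            for (int j = low; j < high; j++) {
                if (a[j] < pivot) {
                    swap(a, store++, j);
                }
            }
            swap(a, store, high);
            return store;
        }

        private static void swap(int[] a, int i, int j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }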

Question #4: How would you randomly select N elements from an array of unknown size so that an equal distribution was achieved for infinite selects?

This one I sorta understood, and I got a very minimal answer to it. I started by thinking about selecting N elements from the array. If there were more elements, you continue selecting until you have 2N elements, then you go back and throw out N elements at random. This operation continues until the entire array has been consumed and you have a set of N elements which are random. The problem is that the last N elements in the array get favored, because they only had the opportunity to be thrown out once, while the first N had the opportunity to be thrown out L / N times, where L is the length. The interviewer then mentioned something about probability and I started thinking about the Java garbage collector (the two really have little in common, but that’s where my brain went). The collector promotes objects into tenure after they have been around for a while. In the same sort of system, as you get to the end of the list the probability of keeping those last N elements should be very low. You apply a probability weight to those elements that determines whether they are kept or thrown out.

Apparently you can solve this sucker by selecting a single element at a time and applying some type of probability function to decide whether to keep it or not. I again had little luck finding information online about this, and I ended up reading up on Monte Carlo methods, Markov chains, sigma-algebras and a ton of other stuff, but I have yet to put it all together into a reasonable guess at the probability function. Since the length of the list is unknown, the function must use the number of elements seen thus far in calculating the probability of keeping an element. And, in order to handle an array of length N, it must always select the first N. Therefore, it must have some mechanism for going back and throwing values out. So, I think I was on the right track, I just didn’t get all the way to the end.
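From what I can tell, what the interviewer was driving at is usually called reservoir sampling: always keep the first N, then keep the kth element with probability N/k, replacing a random element in the reservoir when you do. A rough Java sketch of that idea (assuming the data arrives as an Iterator, since the length is unknown):

    import java.util.*;

    // Reservoir sampling ("Algorithm R"): uniformly pick n items from a stream of
    // unknown length using O(n) memory.
    class Reservoir {
        static <T> List<T> sample(Iterator<T> stream, int n, Random random) {
            List<T> reservoir = new ArrayList<>(n);
            int seen = 0;
            while (stream.hasNext()) {
                T item = stream.next();
                seen++;
                if (reservoir.size() < n) {
                    reservoir.add(item);               // always keep the first n
                } else {
                    int slot = random.nextInt(seen);   // uniform in [0, seen)
                    if (slot < n) {
                        reservoir.set(slot, item);     // keep item with probability n / seen
                    }
                }
            }
            return reservoir;
        }
    }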

Boulder office

The second round of interviews was at the Boulder office. I interviewed with two guys from there and again things were very focused on algorithms. The questions I received from these two were:

Question #1: If you have a list of integers and I ask you to find all the pairs in the list whose sum equals X, what is the best way to do this?

This is pretty simple to brute force. You can iterate over the list and, for each value, figure out the value necessary to sum to X, then brute-force search the list for that value. You can, however, speed this up with a sorted list by proximity searching: you’ll be able to find the second value faster based on your current position in the list and the values around you. You could also split the list into positives and negatives around an index, saving some time. Or you could hash the values. There are lots of ways to solve this sucker; it just depends on the constraints you have, the size of things and what type of performance you need.
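A rough sketch of the hashing version in Java:

    import java.util.*;

    // Find all pairs in the list whose sum equals x: one pass, remembering the
    // values already seen in a hash set.
    class PairSum {
        static List<int[]> pairsSummingTo(int[] values, int x) {
            List<int[]> pairs = new ArrayList<>();
            Set<Integer> seen = new HashSet<>();
            for (int value : values) {
                int needed = x - value;
                if (seen.contains(needed)) {
                    pairs.add(new int[] { needed, value });
                }
                seen.add(value);
            }
            return pairs;
        }
    }

Note that duplicate values in the list will report the same pair more than once; how to handle that is exactly the kind of constraint question worth asking.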

I had a few other questions in Boulder, but I can’t for the life of me recall what they were. Nothing too difficult. Just all algorithms and time calculations (big-o).

California office

The last interview was in California and they flew me out for it. I was put up quite a ways from the campus in a slightly seedy motel. Hey, I figured they would Microsoft me (Microsoft put us up at the Willows Lodge, a 4/5 star place in Washington, for the first MTS. Very posh) because they are Google and have a few billion in the bank. I wonder if they did that to throw me off a bit and see how I did under less than optimal conditions, but who knows.

Question #1: Find all the anagrams of a given word

This was the start of my demise. I asked the interviewer why we were going to optimize the dictionary when it was only a few hundred thousand words. Well, that was the wrong approach for sure, and we started arguing about the size of the common dictionary. He said it was something like 1 million words and I said it was more like 150K. Anyways, it doesn’t matter really, but this interviewer and I had a personality conflict, and his accent was so thick I kept having to ask him to repeat himself. I think this is why they didn’t hire me.

Anyways, this problem is simple. Just sort the letters of each word and use that as the key into a Map whose value is a list of all the words with those letters. Of course, he and I were arguing so much about things by this point that I didn’t get the answer and he had to tell me, but the answer is simple.
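In code the whole thing is only a few lines; a rough sketch:

    import java.util.*;

    // Index a dictionary by "sorted letters": anagrams of any word then become a
    // single map lookup.
    class Anagrams {
        static Map<String, List<String>> index(Collection<String> dictionary) {
            Map<String, List<String>> index = new HashMap<>();
            for (String word : dictionary) {
                String key = sortLetters(word);
                List<String> words = index.get(key);
                if (words == null) {
                    words = new ArrayList<>();
                    index.put(key, words);
                }
                words.add(word);
            }
            return index;
        }

        static List<String> anagramsOf(String word, Map<String, List<String>> index) {
            List<String> words = index.get(sortLetters(word));
            return words == null ? Collections.<String>emptyList() : words;
        }

        private static String sortLetters(String word) {
            char[] letters = word.toLowerCase().toCharArray();
            Arrays.sort(letters);
            return new String(letters);
        }
    }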

Question #2: If you have an algorithm for selecting candidates that has 10% error and you only select 10% from the available list, how many bad candidates do you select given a list of 100? How can you reduce the number of bad candidates if you can’t fix the error?

I was asked this question by the same interviewer from Question #1 and we were still at odds. Well, I started to solve it and he kinda got annoyed. He got up and started trying to solve it for me (strange). Then he sat there for 3-5 minutes trying and eventually gave up without the solution. The best part was when he said, “just trust me”. I was trying to do the math to solve this thing and he couldn’t even start to do the math and finally gave up. This really tipped me off to the fact that Google has a list of questions that interviewers can pick from and this guy picked one and forgot the solution. He hadn’t ever solved this problem himself, that I’m sure of.

As for my solution, I wanted to see the set sizes reduce as the variables changed. If you have an error of 10%, that means you either throw out 10 good candidates or hire 10 bad candidates from a pool of 100, or some mixture of those (e.g. 5 good 5 bad or 3 good 7 bad). Since error is fixed the only way to reduce the number of bad candidates hired is to reduce the initial set size. You want to reduce 100 to 10 or 5. That way you minimize your error. The issue then is that since error is fixed, over time you still hire the same number of bad candidates. As you repeat the process using a smaller set size, you eventually end up with the same number of bad candidates as you would with the original set size.

So, and this is what I argued with the interviewer, the only real solution with the information provided and without making large assumptions is to reduce the error. You have to somehow fix the problem of 10% error because it doesn’t matter in the long run what the set size is. Of course, he didn’t want to discuss that issue, just wanted me to solve the original problem.

Question #3: More discussion of the sum problem

We talked more about the sum problem from the Boulder interview. We discussed reducing the processing time, finding the sums faster and pretty much all the permutations you can think of. It wasn’t too bad and the guy interviewing me for this question was okay. One strange thing was that he had a tag-along who was a new hire, and the tag-along kept sorta smiling at me. That was really disconcerting. Again, this seemed like they were trying to throw me off or something. This was very unprofessional in my opinion, but whatever.

Question #4: Two color graph problem

This is the old graph coloring problem that everyone gets in school. He wanted some pseudo code for this sucker and I busted out a quick and dirty recursion for it. Of course there were issues in my code, because I only had about 2 minutes to do it. We fixed those and then tried to solve it without recursion. This is always pretty simple to do as long as you can turn the recursion into a work list: just use a linked list and, any time you hit a graph node that has links, add those nodes to the end of the list. Then you just iterate over the list until there are no more elements. This is how Savant traverses the dependency graph it builds, so I’ve done it before.
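A rough sketch of the iterative version in Java, with a work queue instead of recursion:

    import java.util.*;

    // Two-color a graph without recursion: breadth-first walk, alternating colors;
    // returns false if two adjacent nodes are forced to share a color (the graph
    // isn't two-colorable).
    class TwoColor {
        static boolean twoColor(List<List<Integer>> adjacency, int[] colors) {
            Arrays.fill(colors, -1);                       // -1 means "not colored yet"
            for (int start = 0; start < adjacency.size(); start++) {
                if (colors[start] != -1) {
                    continue;                              // already handled by an earlier component
                }
                Deque<Integer> queue = new ArrayDeque<>();
                colors[start] = 0;
                queue.add(start);
                while (!queue.isEmpty()) {
                    int node = queue.remove();
                    for (int neighbor : adjacency.get(node)) {
                        if (colors[neighbor] == -1) {
                            colors[neighbor] = 1 - colors[node];
                            queue.add(neighbor);
                        } else if (colors[neighbor] == colors[node]) {
                            return false;
                        }
                    }
                }
            }
            return true;
        }
    }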

Summary

The interesting point of the entire process of interviewing with Google was that I was never asked a single question that was not algorithmic. I’ve talked to dozens of other folks that have interviewed, and they got questions about design, classes, SQL, modeling and loads of other areas. I think I understand why they were asking me these questions: ego. I’ve worked on large scale systems, which are probably larger than a lot of the applications that most folks at Google work on. I mean, not everyone there can work on the search engine or Gmail. Most of them probably work on smaller apps and only a few get to touch the big ones. I think, and I have no supporting evidence to back it up, that because I worked at Orbitz the interviewers out in California wanted to show me up. The humorous part of this is that by doing so they firmly proved their own interview question. By allowing ego and dominance to drive their hiring process they are increasing their error. I hope Google is smart enough to put in checks and balances and remove interviewers like this from the system, because like I mentioned above, the only way to fix the problem is to reduce the error, and the only way to reduce the error is to ensure that the interviewers are going to make the best decision possible for the company and not for themselves.

Aug 09, 2007
 

I was reading the latest newsletter from the guys over at JavaLobby and Matt Schmidt mentioned something about restarts. Here’s his comment:

Sometimes It’s Ok To Restart Your JVM
Now, I’m sure I’ll catch flak for saying this, but I’m definitely not the first. Sometimes, it is ok to just restart your JVM. Now, I’m not just talking about restarting it because you’ve deployed some new code, no, I’m talking about just restarting it for good measure. Maybe you’re restarting it when it reaches a certain error condition or even a certain amount of memory. Some of us value our sleep at night, and when things start to go awry with software that you didn’t write and you can’t seem to fix it, we start to think about solutions that we don’t normally speak of.

It’s these solutions that many of you will scoff at, but sometimes a simple little monitoring hack can save a lot of headaches. These hacks can re-introduce a modicum of stability in a system that was previously not stable and can return some sanity to your developers, who do occasionally need to sleep. So, the moral of the story is that you don’t always need to have the super clean solution; sometimes a hack works just as well. But remember, you have to go back to that problem and actually solve it. A hack is just that, a hack, and it won’t hold forever. Even duct tape breaks eventually :)

Perhaps I’m a bit of a stability and uptime snob having worked at Orbitz, but this made me really uneasy. Matt, who works on pretty decently sized applications, was advocating restarts and hackery. JavaLobby is probably an order of magnitude or two smaller than Orbitz, with very different usage patterns. Therefore, it is probably okay for them to restart JVMs at 3 am sometimes or even schedule restarts. But I disagree that restarting as a practice, due to some unknown instability, is correct. Even if you didn’t write the unstable code, that doesn’t mean it shouldn’t be fixed. In fact, most of the time when something really goes awry, the folks that wrote the code are generally willing to help you fix it. And most software we use these days is open source. Jump in there and fix it yourself.

Also, I disagree that restarting JVMs is necessary or even safe. Once you have more than one server for an application, a restart could actually impact other servers. You have to understand the issues surrounding restarts, because overall system performance might be impacted by a simple restart. I’ve seen more cascading failures due to a simple restart than I’d like to remember. The better solution is rarely a restart, but almost always a fix.

Lastly, just to put some perspective on it: at Orbitz we had many machines that could run without issue for months on end without requiring a restart. Most of the time restarts were necessary only during major system failures. However, even in those cases it was always an investigatory process to find and fix the bug that caused the instability, never something that was done regularly. That said, I don’t fault Matt for this frame of mind. Many applications are built around restarts, and restarts often become a stability “best practice” at some companies. Having worked at a company whose main goal was to ensure that anyone in the world could book their travel 24/7/365, restarts just weren’t on the menu.

Feb 01, 2007
 

I got the official email from Borders today about pre-ordering my copy of the next Harry Potter book. Unfortunately their online ordering system for the book has been down all day. They obviously weren’t expecting such a huge response and probably sent out the same email to all 500 million folks who signed up to be notified. Crazy madness I tell you!

Maybe they need to reduce the work on the web boxes and put it into a cluster or space on the backend. Then at least we could see the website instead of a timeout. They definitely need a fail forward (i.e. fail fast) approach to their websites.
