The phone screen
Question #1: How would you go about making a copy of a directed graph, starting from a given node?
This I had done before for Savant’s dependency tree. I had never distributed this type of operation, but since the operation is simple on a single machine, distributing it seems to be just a matter of dividing the graph up and then sharing information via standard distributed operations like scatter, collect, etc.
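On a single machine the copy is just a depth-first traversal that keeps a map from original nodes to their clones, so cycles and shared nodes are only copied once. A minimal sketch (the Node class here is made up for illustration):

```java
import java.util.*;

// Hypothetical node type, just for illustration.
class Node {
    int value;
    List<Node> edges = new ArrayList<>();
    Node(int value) { this.value = value; }
}

class GraphCopier {
    // Copy the subgraph reachable from 'node', preserving the edge structure.
    static Node copy(Node node, Map<Node, Node> copies) {
        Node existing = copies.get(node);
        if (existing != null) {
            return existing; // already cloned: handles cycles and shared nodes
        }
        Node clone = new Node(node.value);
        copies.put(node, clone);
        for (Node neighbor : node.edges) {
            clone.edges.add(copy(neighbor, copies));
        }
        return clone;
    }
}
```

Calling GraphCopier.copy(root, new HashMap<>()) returns the root of the copied graph; the map doubles as the visited set.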
Question #2: How would you write a cache that uses an LRU?
This I had also worked on. At Orbitz the hotel rates were cached in a very large cache that used an LRU. The LRU was actually from Apache, but the principle was simple. The cache was just a large array that we addressed with double hashing (for better utilization). Colliding entries were immediately replaced, although a bucket system could also be used to prevent throwing away recent cache stores. The LRU itself was just a linked list of pointers to the elements in the hash. When an element was hit, its node in the linked list was removed and appended to the head. This was faster if the hash stored a struct that contained a pointer to the node in the linked list.
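In Java the same hash-plus-linked-list combination is nearly free, since LinkedHashMap can keep its entries in access order and evict the eldest one. A minimal sketch (the capacity is just an illustrative parameter):

```java
import java.util.*;

// Minimal LRU cache: a LinkedHashMap in access order evicts the
// least recently used entry once the capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = order entries by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

Usage is just new LruCache<String, Double>(10000) followed by ordinary get and put calls; every get moves the entry to the most-recently-used end of the internal list.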
Question #3: How would you find the median of an array that can’t fit in memory?
This question I had some problems with. I wasn’t fond of theory and numerical computation at school, so I did what was needed and retained very little. Not to say that I couldn’t learn the material; it just didn’t interest me all that much and to date I’ve never used any of it. Of course, if I was going to work on the search engine code at Google, I would need to brush up. Anyways, I started thinking about slicing the array into segments and then distributing those. Each agent in the grid could further slice and distribute to build a tree. Then each agent would find its median and push that value to its parent. That is as far as I got, because I didn’t have the math to know if there was an algorithm to use those medians to find the global median. Well, after the call I busted out the CLR and found that all this stuff is called “selection” algorithms. There is one algorithm that does exactly as I described, but it then takes the “median-of-medians” and partitions the entire list around that value. This could probably be done on the grid, and there was a PDF paper I stumbled across that talked about doing things this way. I’m not sure that is the best call though. After thinking about it more, I wondered if the distributed grid could be a b-tree of sorts that uses the data set’s median value (e.g. for real numbers, always a known quantity if the data set’s domain is known) to build the tree. Once the tree was built you just recursively ask each node for its count and then ask for the ith element, where i = count / 2. Again, I couldn’t really find anything conclusive online to state this was a possible solution.
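For the single-machine case, the selection algorithm from the book is straightforward to sketch: find the k-th smallest element by recursing on medians of groups of five, and call it with k = n / 2 for the median. A rough sketch (not the distributed version, and the list copies keep it readable rather than fast):

```java
import java.util.*;

// Deterministic selection (median-of-medians): find the k-th smallest
// element (0-based) of the list. The median is select(values, n / 2).
class Select {
    static int select(List<Integer> values, int k) {
        if (values.size() <= 5) {
            List<Integer> sorted = new ArrayList<>(values);
            Collections.sort(sorted);
            return sorted.get(k);
        }

        // Take the median of each group of five, then recurse for the pivot.
        List<Integer> medians = new ArrayList<>();
        for (int i = 0; i < values.size(); i += 5) {
            List<Integer> group = new ArrayList<>(values.subList(i, Math.min(i + 5, values.size())));
            Collections.sort(group);
            medians.add(group.get(group.size() / 2));
        }
        int pivot = select(medians, medians.size() / 2);

        // Partition around the pivot and recurse into the side containing k.
        List<Integer> less = new ArrayList<>(), equal = new ArrayList<>(), greater = new ArrayList<>();
        for (int value : values) {
            if (value < pivot) less.add(value);
            else if (value > pivot) greater.add(value);
            else equal.add(value);
        }
        if (k < less.size()) return select(less, k);
        if (k < less.size() + equal.size()) return pivot;
        return select(greater, k - less.size() - equal.size());
    }
}
```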
Question #4: How would you randomly select N elements from an array of unknown size so that an equal distribution was achieved for infinite selects?
This one I sorta understood and got a very minimal answer to. I started by thinking about selecting N elements from the array. If there were more elements, you continue to select until you have 2N elements. Then you go back and throw out N elements at random. This operation continues until the entire array has been consumed and you now have a set of N elements which are random. The problem is that the last N elements in the array got favored, because they only had the opportunity to be thrown out once, while the first N had the opportunity to be thrown out L / N times, where L is the length. The interviewer then mentioned something about probability and I started thinking about the Java memory model (the two really have little in common, but that’s where my brain went). The JVM promotes objects into the tenured generation after they have been around for a while. In the same sorta system, as you get to the end of the list the probability of keeping those last N elements is very low. You apply a probability weight to those elements that determines if they are kept or thrown out. Apparently you can solve this sucker by selecting a single element at a time and applying some type of probability function to it to decide whether to keep it or not. I again had little luck finding information online about this, and I ended up reading up on Monte Carlo methods, Markov chains, sigma-algebras and a ton of other stuff, but I have yet to put it all together to make a reasonable guess at the probability function. Since the length of the list is unknown, the function must use the number of elements seen thus far in calculating the probability of keeping an element. And, in order to handle an array of length N, it must always select the first N. Therefore, it must have some mechanism for going back and throwing values out. So, I think I was on the right track, just didn’t get all the way to the end.
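The standard technique here turns out to be reservoir sampling: always keep the first N elements, then keep the i-th element (1-based) with probability N / i, replacing a random slot in the reservoir when you do. A minimal sketch:

```java
import java.util.*;

class ReservoirSampler {
    // Select n elements uniformly at random from a stream of unknown length.
    static int[] sample(Iterator<Integer> stream, int n) {
        int[] reservoir = new int[n];
        Random random = new Random();
        int seen = 0;
        while (stream.hasNext()) {
            int value = stream.next();
            if (seen < n) {
                reservoir[seen] = value;             // always keep the first n
            } else {
                int slot = random.nextInt(seen + 1); // keep with probability n / (seen + 1)
                if (slot < n) {
                    reservoir[slot] = value;         // evict a random earlier pick
                }
            }
            seen++;
        }
        return reservoir;
    }
}
```

Every element ends up in the final reservoir with probability N / L, where L is the total length, which is exactly the fix for the bias toward the tail of the list.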
Boulder office
Second round of interviews was at the Boulder office. I interviewed with two guys from there, and again things were very much focused on algorithms. The questions I received from these two were:
Question #1: If you have a list of integers and I ask you to find all the pairs in the list whose sum equals X, what is the best way to do this?
This is pretty simple to brute force. You can iterate over the list and for each value figure out the complement necessary to sum to X, then brute-force search the list for that value. You can however speed this up with a sorted list by proximity searching: you’ll be able to find the second value faster based on your current position in the list and the values around you. You could also split the list into positive and negative values around an index, saving some time. Or you could hash the values. There are lots of ways to solve this sucker; it just depends on the constraints you have, the size of things and what kind of performance you need.
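The hashing variant is the one I’d reach for first: a single pass that checks whether the complement of each value has already been seen. A quick sketch:

```java
import java.util.*;

class PairSum {
    // Find all pairs (a, b) in 'values' with a + b == target, in one pass.
    static List<int[]> pairsSummingTo(int[] values, int target) {
        List<int[]> pairs = new ArrayList<>();
        Set<Integer> seen = new HashSet<>();
        for (int value : values) {
            int complement = target - value;
            if (seen.contains(complement)) {
                pairs.add(new int[] { complement, value });
            }
            seen.add(value);
        }
        return pairs;
    }
}
```

That’s linear time and linear space; the sorted-list version trades the extra space for an O(n log n) sort and two pointers walking in from the ends.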
I had a few other questions in Boulder, but I can’t for the life of me recall what they were. Nothing too difficult. Just all algorithms and time calculations (big-o).
California office
Last interview was in California and they flew me out for it. I was put up quite a ways from the campus in a slightly seedy motel. Hey, I figured they would Microsoft me (Microsoft put us up at the Willows Lodge for the first MTS, which is a 4/5 star in Washington. Very posh) because they are Google and have a few billion in the bank. I wonder if they did that to throw me off a bit and see how I did under less than optimal conditions, but who knows.
Question #1: Find all the anagrams of a given word
This was the start of my demise. I asked the interviewer why we were going to optimize the dictionary when it was only a few hundred thousand words. Well, that was the wrong approach for sure, and we started arguing about the size of the common dictionary. He said it was like 1 million words and I said it was more like 150K. Anyways, it doesn’t matter really, but this interviewer and I had a personality conflict, and his accent was so thick I kept having to ask him to repeat himself. I think this is why they didn’t hire me.
Anyways, this problem’s simple. Just sort each word’s letters and use that as the key into a Map whose value is a list of all the words with those letters. Of course, he and I were arguing so much about things by this point that I didn’t get the answer and he had to tell me, but the answer’s simple.
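In code the whole dictionary index is only a handful of lines; sorting a word’s letters gives the canonical key that all of its anagrams share. A sketch, assuming the dictionary is just a list of strings:

```java
import java.util.*;

class AnagramIndex {
    private final Map<String, List<String>> index = new HashMap<>();

    // Words with the same sorted letters end up under the same key.
    AnagramIndex(List<String> dictionary) {
        for (String word : dictionary) {
            index.computeIfAbsent(key(word), k -> new ArrayList<>()).add(word);
        }
    }

    // All dictionary words that are anagrams of the given word.
    List<String> anagramsOf(String word) {
        return index.getOrDefault(key(word), Collections.emptyList());
    }

    private static String key(String word) {
        char[] letters = word.toLowerCase().toCharArray();
        Arrays.sort(letters);
        return new String(letters);
    }
}
```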
Question #2: If you have an algorithm for selecting candidates that has 10% error and you only select 10% from the available list, how many bad candidates do you select given a list of 100? How can you reduce the number of bad candidates if you can’t fix the error?
I was asked this question by the same interviewer from Question #1 and we were still at odds. Well, I started to solve it and he kinda got annoyed. He got up and started trying to solve it for me (strange). Then he sat there for 3-5 minutes trying and eventually gave up without the solution. The best part was when he said, “just trust me”. I was trying to do the math to solve this thing and he couldn’t even start to do the math and finally gave up. This really tipped me off to the fact that Google has a list of questions that interviewers can pick from and this guy picked one and forgot the solution. He hadn’t ever solved this problem himself, that I’m sure of.
As for my solution, I wanted to see how the set sizes reduce as the variables change. If you have an error of 10%, that means that out of a pool of 100 you either throw out 10 good candidates or hire 10 bad candidates, or some mixture of those (e.g. 5 good and 5 bad, or 3 good and 7 bad). Since the error is fixed, the only way to reduce the number of bad candidates hired is to reduce the initial set size. You want to reduce 100 to 10 or 5; that way you minimize the absolute number of mistakes per round. The issue is that since the error is fixed, over time you still hire the same number of bad candidates. As you repeat the process with a smaller set size, you eventually end up with the same number of bad candidates as you would have with the original set size.
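A quick sanity check of that, under the simple reading that each hiring decision goes wrong 10% of the time: hiring 10 out of a pool of 100 gives an expected 1 bad hire, while splitting the same 100 into ten pools of 10 and hiring 1 from each gives ten decisions at 10% error apiece, so still an expected 1 bad hire for the same headcount. The batch size moves the mistakes around; it doesn’t remove them.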
So, and this is what I argued with the interviewer, the only real solution with the information provided and without making large assumptions is to reduce the error. You have to somehow fix the problem of 10% error because it doesn’t matter in the long run what the set size is. Of course, he didn’t want to discuss that issue, just wanted me to solve the original problem.
Question #3: More discussion of the sum problem
We talked more about the sum problem from the Boulder interview. We discussed reducing the processing time, finding the sums faster and pretty much all the permutations you can think of. It wasn’t too bad and the guy interviewing me for this question was okay. One strange thing was he had a tag-along that was a new hire and the tag-along kept sorta smiling at me. That was really disconcerting. Again, this seemed like they were trying to throw me off or something. This was very unprofessional in my opinion, but whatever.
Question #4: Two color graph problem
This is the old graph coloring problem that everyone gets in school. He wanted some pseudo code for this sucker and I busted out a quick and dirty recursion for it. Of course there were issues in my code, because I only had about 2 minutes to do it. We fixed those and then tried to solve it without recursion. This is always pretty simple to do as long as you can turn the recursion into a work list. Just use a linked list, and anytime you hit a graph node that has links, add those nodes to the end of the list. Then you just iterate over the list until there are no more elements. This is how Savant traverses the dependency graph it builds, so I’ve done it before.
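The iterative version is just a breadth-first traversal with a work queue: give each neighbor the opposite color and fail if a neighbor already has the same color. A sketch over an adjacency-list representation:

```java
import java.util.*;

class TwoColoring {
    // Try to 2-color a graph given as adjacency lists; returns the color
    // (0 or 1) per node, or null if the graph can't be two-colored.
    static int[] twoColor(List<List<Integer>> adjacency) {
        int n = adjacency.size();
        int[] color = new int[n];
        Arrays.fill(color, -1); // -1 = not yet colored

        for (int start = 0; start < n; start++) {
            if (color[start] != -1) continue;
            color[start] = 0;
            Deque<Integer> queue = new ArrayDeque<>();
            queue.add(start);
            while (!queue.isEmpty()) {
                int node = queue.poll();
                for (int neighbor : adjacency.get(node)) {
                    if (color[neighbor] == -1) {
                        color[neighbor] = 1 - color[node];
                        queue.add(neighbor);
                    } else if (color[neighbor] == color[node]) {
                        return null; // two adjacent nodes forced to the same color
                    }
                }
            }
        }
        return color;
    }
}
```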
Summary
The interesting point of the entire process of interviewing with Google was that I was never asked a single question that was not algorithmic. I’ve talked to dozens of other folks that have interviewed and they got questions about design, classes, SQL, modeling and loads of other areas. I think I understand why they were asking me these questions: ego. I’ve worked on large-scale systems, which are probably larger than a lot of the applications that most folks at Google work on. I mean, not everyone there can work on the search engine or GMail. Most of them probably work on smaller apps and only a few get to touch the big ones. I think, though I have no supporting evidence to back it up, that because I worked at Orbitz the interviewers out in California wanted to show me up. The humorous part of this is that by doing so they firmly proved their own interview question. By allowing ego and dominance to drive their hiring process they are increasing their error. I hope Google’s smart enough to put in checks and balances and remove interviewers like this from the system, because, like I mentioned above, the only way to fix the problem is to reduce the error, and the only way to reduce the error is to ensure that the interviewers make the best decision possible for the company and not for themselves.