Aug 29, 2007

I received a reply to my Google post today. Since I get a lot of spam, I filter everything on this blog. Usually I just make sure a comment isn’t porn or spam and approve it. This one was extremely interesting, but not very compelling, mostly because it was posted anonymously. That could mean it’s a Google employee who didn’t want to be found, or just some random person trolling (I do have his IP address however – hehe). Anyways, the comment is a personal attack, so at first I took offense and got defensive. But the more I thought about it, the more I began to think about interviewing again. Here’s the comment:

Wow, this entire article is littered with hubris and cocky remarks, but the summary takes the cake. Why didn’t Google hire you? They were trying to best you in a battle of wits and you were too smart! Of course that must be it since you’re an algorithms God!

Seriously Bro, you need to check yourself. Your ego, not Google’s, probably cost you the position.

First things first… I want to clear the air about my post and this comment. The majority of my post has no cocky remarks or hubris. It’s just what happened, pretty simple. I agree that my post got somewhat more subjective near the end, but I was forced to draw conclusions because of the treatment I received. Just to give you perspective, the Google recruiter left me a voice mail telling me I was not selected because they “thought my coding skills were not very good”. You read that correctly, A VOICE MAIL. This was very unprofessional. My phone and Boulder interviews were great and very productive. However, my treatment in California and after was very poor.

As for being a God of algorithms, I clearly state I’m not. I had trouble with some of their questions and that’s the point. They want to challenge you, which I completely agree with.

As for egos and battles of wits, I disagree with the commenter, and this is what I’ve been thinking about since I read the comment. Ego is an interesting beast. In an interview there are a few possibilities, two of which are:

1. The interviewee has an ego and thinks they are better than the interviewer or the company. In this case they usually just answer the questions, and if they get stumped, in my experience they start arguing. In most cases the arguments are defensive and without any pragmatic basis.

2. The interviewer has an ego and thinks they are better than the interviewee. In this case the interviewer isn’t out to find good candidates. They are out to find the candidates that will make them feel smart. If you answer their questions well, try to create good dialogs or introduce any pragmatism, you’ll probably get the toss.

So, what happened with me? Well, I have no doubt that the dictionary question threw me off a bit. However, I was definitely excited about being able to interview at Google. Even though I didn’t know the answer, I was interested in trying to figure it out with a good dialog. My first reaction was that it might be an early optimization because it could be a complex solution. In truth the optimization isn’t complex, but I didn’t know the answer at first. So, I wanted to begin a dialog about it and try to work through it. This didn’t go over well. In the end the interviewer got annoyed and just gave me the answer after I asked for it (he was about to move on without finishing that question when I stopped him).

Next was the hiring error question. This question had very little information around it and I was again interested in solving it. I did what I normally do and asked some questions, trying to start a dialog about it. This again didn’t go over well. The interviewer actually tried to solve it. I’m not making that up, it actually happened. But he couldn’t. This really made me think that this interviewer had a lot of ego and access to a list of questions to pick from. He picked a few, but hadn’t actually solved them all. After the interview I talked a bit with my local mathematics whiz (two master’s degrees and almost a PhD, in case people think he’s just some quack I work with) just to see if the interviewer and I were missing something. He confirmed that the question didn’t make sense given the information, and that the solution I posed was correct without more information and bounds. You have to reduce the error to fix things.

Lastly, my comments about the tag-along were completely subjective and editorial. I personally thought it was poor form to bring along someone who wasn’t going to participate in the interview. It was uncomfortable and made it difficult to concentrate. Was this ego? Maybe. But probably just plain nerves.

Now, I think the commenter was specifically thrown off by my summary, and once he had finished that, he forgot about the other stuff I wrote. The summary was completely editorial and just a plain old guess as to the end result. The reason I suggested that I was “grilled” because of my resume is that I’ve had a number of friends interview at Google. Many of them never got the questions I did, and my phone interviewer even told me that his questions were the hardest that Google gave. So why, when I did so well in three interviews prior, would everything fall apart in the end? How was it that when I called my friends at Google they were astonished that I wasn’t hired?

In honesty, I don’t know. So I have to hypothesize as to what happened in California. During both interviews I had in California I had a feeling that I wasn’t really a candidate. I wanted to get a sense for what life was like at the company. I asked the first interviewer if people went out for drinks or if there were company activities and his answer was, “I’ve got kids and I don’t do that”. Another interviewer took off part way through the interview. So, given my experience I drew out some conclusions. Were they accurate? Who knows, but I did warn readers that I had nothing to back it up. As for my points:

Have I worked on huge systems? You bet.

Are the systems I’ve built larger than those of the majority of other engineers? Yes. 3,000-5,000 servers, distributed, etc.

Do I think that I write solid code? Absolutely.

Do I think I could work at Google on huge systems? Yes.

For these reasons, I wrote my summary and made a guess as to the result. The conclusion I came to was based on experience. Having been part of interviewing processes at a number of companies, I have seen a few interviewers clobber lesser candidates but hire them anyway, while passing over good candidates. Therefore, I believe it is fundamentally important to ensure your interviewers are doing a good job and working in the best interest of the company. If they are allowing ego to interfere with making good selections, they shouldn’t be interviewing. Lastly, my summary only applies to my experience in California. My phone interview and Boulder interview were great. Not a single sign of ego during either.

Aug 27, 2007

The phone screen

Question #1: How would you go about making a copy from a node in a directed graph?

This I had done before for Savant’s dependency tree. I had never distributed this type of operation, but since the operation is simple on a single machine, distributing it seems to be just a matter of dividing the work and then sharing information via standard distributed algorithm operations like scatter, collect, etc.
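For the single-machine piece, it’s just a traversal that maps each original node to its copy so that cycles and shared nodes are only cloned once. A rough sketch (the Node type here is made up for illustration):

import java.util.*;

public class GraphCopy {
    static class Node {
        int value;
        List<Node> edges = new ArrayList<Node>();
        Node(int value) { this.value = value; }
    }

    // Breadth-first walk from the start node, mapping original -> copy so that
    // cycles and shared nodes are cloned exactly once.
    static Node copyFrom(Node start) {
        if (start == null) return null;
        Map<Node, Node> copies = new HashMap<Node, Node>();
        Deque<Node> queue = new ArrayDeque<Node>();
        copies.put(start, new Node(start.value));
        queue.add(start);
        while (!queue.isEmpty()) {
            Node original = queue.remove();
            Node copy = copies.get(original);
            for (Node neighbor : original.edges) {
                Node neighborCopy = copies.get(neighbor);
                if (neighborCopy == null) {
                    neighborCopy = new Node(neighbor.value);
                    copies.put(neighbor, neighborCopy);
                    queue.add(neighbor);
                }
                copy.edges.add(neighborCopy);
            }
        }
        return copies.get(start);
    }
}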

Question #2: How would you write a cache that uses an LRU?

This I had also worked on. At Orbitz the hotel rates were cached in a very large cache that used an LRU. The LRU was actually from Apache, but the principle was simple. The cache was just a large array that we used double hashing on (for better utilization). Collisions were immediately replaced, although a bucket system could also be used to prevent throwing away recent cache stores. The LRU itself was just a linked list of pointers to the elements in the hash. When an element was hit, that node in the linked list was removed and appended to the head. This was faster if the hash stored a struct that contained a pointer to the node in the linked list.
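In Java you can get the same behavior almost for free from LinkedHashMap, which is a decent way to sketch the principle (the capacity is whatever you need):

import java.util.LinkedHashMap;
import java.util.Map;

// Access-ordered LinkedHashMap: every get() moves the entry to the tail, and
// removeEldestEntry() evicts the head (the least recently used entry) once the
// cache grows past its capacity.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // true = order entries by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}

Under the covers that’s basically the structure described above: a hash whose entries are also strung together in a doubly linked list.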

Question #3: How would you find the median of an array that can’t fit in memory?

This question I had some problems with. I wasn’t fond of theory and numerical computation at school and so I did what was needed and retained very little. Not to say that I couldn’t learn the material, it just didn’t interest me all that much, and to date I’ve never used any of it. Of course, if I was going to work on the search engine code at Google, I would need to brush up. Anyways, I started thinking about slicing the array into segments and then distributing those. Each agent in the grid could further slice and distribute to build a tree. Then each agent would find the median and push that value to its parent. That is as far as I got, because I didn’t have the math to know if there was an algorithm to use those medians to find the global median. Well, after the call I busted out the CLR and found that all this stuff is called “selection” algorithms. There is one algorithm that does exactly as I described, but it then takes the “median-of-medians” and partitions the entire list around that value. This could probably be done on the grid, and there was a PDF paper I stumbled across that talked about doing things this way. I’m not sure that is the best call though. After thinking about it more, I wondered if the distributed grid could be a b-tree of sorts that uses the data set’s (e.g. real numbers) median value (always a known quantity if the data set is known) to build the tree. Once the tree was built you just recursively ask each node for its count and then ask for the ith element, which would be i = count / 2. Again, I couldn’t really find anything conclusive online to state this was a possible solution.
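For reference, here’s the single-machine selection idea in its simplest randomized form. Median-of-medians just picks the pivot more carefully to guarantee linear time, and a distributed version would partition each slice around the same pivot value:

import java.util.Random;

public class Selection {
    private static final Random RANDOM = new Random();

    // Returns the k-th smallest element (k is zero-based); the median of n
    // elements is k = n / 2. Average-case O(n); picking the pivot with
    // median-of-medians makes the worst case linear as well.
    static int select(int[] a, int k) {
        int low = 0, high = a.length - 1;
        while (true) {
            if (low == high) return a[low];
            int pivotIndex = low + RANDOM.nextInt(high - low + 1);
            pivotIndex = partition(a, low, high, pivotIndex);
            if (k == pivotIndex) return a[k];
            if (k < pivotIndex) high = pivotIndex - 1;
            else low = pivotIndex + 1;
        }
    }

    // Lomuto partition: everything less than the pivot ends up to its left,
    // everything else to its right; returns the pivot's final index.
    static int partition(int[] a, int low, int high, int pivotIndex) {
        int pivot = a[pivotIndex];
        swap(a, pivotIndex, high);
        int store = low;
        for (int i = low; i < high; i++) {
            if (a[i] < pivot) swap(a, i, store++);
        }
        swap(a, store, high);
        return store;
    }

    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}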

Question #4: How would you randomly select N elements from an array of unknown size so that an equal distribution was achieved for infinite selects?

This one I sorta understood and got a very minimal answer to. I started by thinking about selecting N elements from the array. If there were more elements, you continue to select until you have 2N elements. Then you go back and throw out N elements at random. This operation continues until the entire array has been consumed and you now have a set of N elements which are random. The problem is that the last N elements in the array get favored, because they only had the opportunity to be thrown out once, while the first N had the opportunity to be thrown out L / N times, where L is the length. The interviewer then mentioned something about probability and I started thinking about the Java Memory Model (these really have little in common, but that’s where my brain went). The JVM promotes objects into the tenured generation after they have been around for a while. In the same sorta system, as you get to the end of the list, the probability of keeping those last N elements is very low. You apply a probability weight to those elements that determines if they are kept or thrown out. Apparently you can solve this sucker by selecting a single element and applying some type of probability function to it and deciding to keep it or not. I again had little luck finding information online about this, and I ended up reading up on Monte Carlo, Markov chains, sigma-algebras and a ton of other stuff, but I have yet to put it all together to make a reasonable guess at the probability function. Since the length of the list is unknown, the function must use the number seen thus far in calculating the probability of keeping an element. And, in order to handle an array of length N, it must select the first N always. Therefore, it must have some mechanism for going back and throwing values out. So, I think I was on the right track, just didn’t get all the way to the end.
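Having dug around since, I’m fairly sure the probability function he was after is what’s commonly called reservoir sampling: always keep the first N, then keep the i-th element with probability N/i, evicting a random resident when you do. A sketch:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Random;

public class ReservoirSampler {
    // Selects n elements from a stream of unknown length so that every element
    // ends up in the sample with probability n / (total elements seen).
    static <T> List<T> sample(Iterator<T> stream, int n, Random random) {
        List<T> reservoir = new ArrayList<T>(n);
        int seen = 0;
        while (stream.hasNext()) {
            T element = stream.next();
            seen++;
            if (reservoir.size() < n) {
                reservoir.add(element);           // always keep the first n
            } else {
                int slot = random.nextInt(seen);  // keep with probability n / seen
                if (slot < n) reservoir.set(slot, element);
            }
        }
        return reservoir;
    }
}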

Boulder office

The second round of interviews was at the Boulder office. I interviewed with two guys from there, and again things were very focused on algorithms. The questions I received from these two were:

Question #1: If you have a list of integers and I ask you to find all the pairs in the list whose sum equals X, what is the best way to do this?

This is pretty simple to brute force: iterate over the list and, for each value, figure out the value necessary to sum to X, then brute-force search the list for that value. You can, however, speed this up with a sorted list by proximity searching; you’ll be able to find the second value faster based on your current position in the list and the values around you. You could also split the list into positive and negative values around an index, saving some time. You could hash the values. There are lots of ways to solve this sucker; it just depends on the constraints you have, the size of things and what type of performance you need.
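The hashing version is probably the cleanest of the bunch: one pass, remembering what you’ve already seen:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PairSum {
    // For each value, check whether its complement (x - value) has already been
    // seen. O(n) time and O(n) space for a single pass over the list.
    static List<int[]> pairsSummingTo(int[] values, int x) {
        List<int[]> pairs = new ArrayList<int[]>();
        Set<Integer> seen = new HashSet<Integer>();
        for (int value : values) {
            if (seen.contains(x - value)) {
                pairs.add(new int[] { x - value, value });
            }
            seen.add(value);
        }
        return pairs;
    }
}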

I had a few other questions in Boulder, but I can’t for the life of me recall what they were. Nothing too difficult. Just all algorithms and time calculations (big-o).

California office

The last interview was in California, and they flew me out for it. I was put up quite a ways from the campus in a slightly seedy motel. Hey, I figured they would Microsoft me (Microsoft put us up at the Willows Lodge for the first MTS, which is a 4/5 star in Washington. Very posh) because they are Google and have a few billion in the bank. I wonder if they did that to throw me off a bit and see how I did under less than optimal conditions, but who knows.

Question #1: Find all the anagrams of a given word

This was the start of my demise. I asked the interviewer why we were going to optimize the dictionary when it was only a few 100K words. Well, that was the wrong approach for sure, and we started arguing about the size of the common dictionary. He said it was like 1 million words and I said it was more like 150K. Anyways, it doesn’t matter really, but this interviewer and I had a personality conflict, and his accent was so thick I kept having to ask him to repeat himself. I think this is why they didn’t hire me.

Anyways, this problem’s simple. Just sort the letters of each word and use that as the key into a Map whose value is a list of all the words with those letters. Of course, he and I were arguing so much about things by this point that I didn’t get the answer and he had to tell me, but the answer’s simple.
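For the record, the sorted-key index looks something like this. Build it once over the dictionary and every anagram lookup is a single map hit:

import java.util.*;

public class AnagramIndex {
    private final Map<String, List<String>> index = new HashMap<String, List<String>>();

    // File every word under its letters in sorted order; all anagrams of a word
    // share the same key.
    public AnagramIndex(Collection<String> dictionary) {
        for (String word : dictionary) {
            String key = sortedLetters(word);
            List<String> words = index.get(key);
            if (words == null) {
                words = new ArrayList<String>();
                index.put(key, words);
            }
            words.add(word);
        }
    }

    public List<String> anagramsOf(String word) {
        List<String> words = index.get(sortedLetters(word));
        return words == null ? Collections.<String>emptyList() : words;
    }

    private static String sortedLetters(String word) {
        char[] letters = word.toLowerCase().toCharArray();
        Arrays.sort(letters);
        return new String(letters);
    }
}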

Question #2: If you have an algorithm for selecting candidates that has 10% error and you only select 10% from the available list, how many bad candidates do you select given a list of 100? How can you reduce the number of bad candidates if you can’t fix the error?

I was asked this question by the same interviewer from Question #1 and we were still at odds. Well, I started to solve it and he kinda got annoyed. He got up and started trying to solve it for me (strange). Then he sat there for 3-5 minutes trying and eventually gave up without the solution. The best part was when he said, “just trust me”. I was trying to do the math to solve this thing and he couldn’t even start to do the math and finally gave up. This really tipped me off to the fact that Google has a list of questions that interviewers can pick from and this guy picked one and forgot the solution. He hadn’t ever solved this problem himself, that I’m sure of.

As for my solution, I wanted to see the set sizes reduce as the variables changed. If you have an error of 10%, that means you either throw out 10 good candidates or hire 10 bad candidates from a pool of 100, or some mixture of those (e.g. 5 good 5 bad or 3 good 7 bad). Since error is fixed the only way to reduce the number of bad candidates hired is to reduce the initial set size. You want to reduce 100 to 10 or 5. That way you minimize your error. The issue then is that since error is fixed, over time you still hire the same number of bad candidates. As you repeat the process using a smaller set size, you eventually end up with the same number of bad candidates as you would with the original set size.

So, and this is what I argued with the interviewer, the only real solution with the information provided and without making large assumptions is to reduce the error. You have to somehow fix the problem of 10% error because it doesn’t matter in the long run what the set size is. Of course, he didn’t want to discuss that issue, just wanted me to solve the original problem.

Question #3: More discussion of the sum problem

We talked more about the sum problem from the Boulder interview. We discussed reducing the processing time, finding the sums faster and pretty much all the permutations you can think of. It wasn’t too bad and the guy interviewing me for this question was okay. One strange thing was he had a tag-along that was a new hire and the tag-along kept sorta smiling at me. That was really disconcerting. Again, this seemed like they were trying to throw me off or something. This was very unprofessional in my opinion, but whatever.

Question #4: Two color graph problem

This is the old graph coloring problem that everyone gets in school. He wanted some pseudo code for this sucker and I busted out a quick and dirty recursion for it. Of course there were issues in my code, because I only had about 2 minutes to do it. We fixed those and then tried to solve it without recursion. This is always pretty simple to do as long as you can make it tail recursive. Just use a linked list, and anytime you hit a graph node that has links, add those nodes to the end of the list. Then you can just iterate over the list until there are no more elements. This is how Savant traverses the dependency graph it builds, so I’ve done it before.
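The iterative version, roughly as I described it: a work list seeded with the start node, coloring each neighbor the opposite color and bailing out if a neighbor already has the same color:

import java.util.*;

public class TwoColor {
    static class Node {
        List<Node> links = new ArrayList<Node>();
    }

    // Colors the connected component containing start with two colors (0 and 1).
    // Returns false if some edge joins two nodes of the same color, i.e. the
    // graph contains an odd cycle and can't be two-colored.
    static boolean twoColor(Node start, Map<Node, Integer> colors) {
        Deque<Node> pending = new ArrayDeque<Node>();
        colors.put(start, 0);
        pending.add(start);
        while (!pending.isEmpty()) {
            Node node = pending.remove();
            int neighborColor = 1 - colors.get(node);
            for (Node neighbor : node.links) {
                Integer existing = colors.get(neighbor);
                if (existing == null) {
                    colors.put(neighbor, neighborColor);
                    pending.add(neighbor);
                } else if (existing != neighborColor) {
                    return false;   // same color on both ends of an edge
                }
            }
        }
        return true;
    }
}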

Summary

The interesting point of the entire process of interviewing with Google was that I was never asked a single question that was not algorithmic. I’ve talked to dozens of other folks that have interviewed and they got questions about design, classes, SQL, modeling and loads of other areas. I think I understand why they were asking me these questions: ego. I’ve worked on large scale systems, which probably are larger than a lot of the applications that most folks at Google work on. I mean, not everyone there can work on the search engine or GMail. Most of them probably work on smaller apps and only a few get to touch the big ones. I think, and have no supporting evidence to back it up, that because I worked at Orbitz the interviewers out in California wanted to show me up. The humorous part of this is that by doing so they firmly proved their own interview question. By allowing ego and dominance to drive their hiring process, they are increasing their error. I hope Google’s smart enough to put in checks and balances and remove interviewers like this from the system. Like I mentioned above, the only way to fix the problem is to reduce the error, and the only way to reduce the error is to ensure that the interviewers are making the best decision possible for the company and not for themselves.

Aug 22, 2007

I’m looking at JavaConfig for the first time. Here are some thoughts:

1. The @Configuration annotation allows you to set up auto-wiring and lazy initialization, but it also seems required even if you don’t specify anything. That seems like a bit of overhead, since you have to explicitly pass the Configuration objects into the ApplicationContext constructor. Shouldn’t that just assume everything passed to it is a Configuration, or do you really need to define this annotation?

2. Okay, I’m totally dumbfounded by @Bean. Is this really forcing you to instantiate each of your beans by hand? Doesn’t this fundamentally go against DI/IoC/whatever? One of the benefits of DI is not ever instantiating your classes if you can avoid it. And when I say not ever, I really mean not one single line of production code ever calls new on any class that is injected. Unit tests, of course, are exceptions.
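To make the complaint concrete, this is roughly what a JavaConfig configuration looks like as I read the docs (MyService, MyServiceImpl and friends are hypothetical names, and the exact annotation packages depend on the JavaConfig release):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {
    // You new up and wire every bean yourself, which is exactly the kind of
    // code DI is supposed to take off your hands.
    @Bean
    public MyRepository myRepository() {
        return new MyRepositoryImpl();
    }

    @Bean
    public MyService myService() {
        return new MyServiceImpl(myRepository());   // hand-wired dependency
    }
}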

The reason I’m being a bit harsh on this is that over the weekend I did a sizable refactoring of Vertigo-Java and wanted to ensure that I didn’t break anyone. Well, since Vertigo-Java applications should use DI for everything, I knew I could change constructors without impacting clients. And I did. I created new interfaces and implementations and refactored a lot of code out of services into other locations. I also added new bindings to classes like ServletContext and injected them into places where I had originally been using static holders, and sure enough, the 3 or so applications I had internally compiled and passed all their tests without a single line of code change. Here’s a simple example:
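Something along these lines, with a hypothetical MyService standing in for the real interfaces (each type lives in its own file in practice):

import javax.servlet.ServletContext;

import com.google.inject.ImplementedBy;
import com.google.inject.Inject;

// The interface carries its default binding, so no module changes and no client
// changes are needed when the implementation's constructor is refactored.
@ImplementedBy(DefaultMyService.class)
interface MyService {
    void doWork();
}

class DefaultMyService implements MyService {
    private final ServletContext servletContext;   // bound elsewhere in a Guice module

    @Inject
    DefaultMyService(ServletContext servletContext) {
        this.servletContext = servletContext;
    }

    public void doWork() {
        // ...
    }
}

class MyClient {
    private final MyService service;

    @Inject
    MyClient(MyService service) {   // clients depend only on the interface
        this.service = service;
    }
}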

Now, since I also defined default implementations for these interfaces using the @ImplementedBy, nothing broke. This was great for me. I could change quite a lot of code and as long as everyone was just using the MyService interface and things were correctly injected, I didn’t break anyone.

Seems to me that @Bean pretty much forces you back into refactoring hell. Agreed, I only refactor the Configuration objects, but that means each Configuration object in all of my projects, libraries, components, etc., and all of the projects that use them, and projects that use those projects and…. You get the cascading problem of Configuration refactoring. This just sounds disastrous. Am I missing something?

Plus, it doesn’t seem like JavaConfig can even do automatic constructor injection. You HAVE to create your own objects. Is this really true? Because that seems extremely painful to me. I much prefer constructor injection to setter injection, if for nothing else than correctness.

3. ExternalBean seems like it handles strongly typed by-hand dependency injection to named objects. Other than that, it seems like it doesn’t do a whole lot. Of course, it seems to be a requirement because of the way that JavaConfig forces you to instantiate and wire up objects by hand.

4. I’m not even sure what to think about visibility modifiers. They seem to be something I would never use because they seem like a configuration nightmare. I guess it seems like if I’m gonna create an object, anyone who needs it should get it. I guess I’m lacking the experience of needing to hide objects from each other. This seems like it could also cause some major headaches for large development organizations.

5. Another thing that goes against my general sense of reason is the fact that, since you are defining methods that can be invoked from any Java code, the behavior of those methods varies depending on whether you are calling them directly or Spring is calling them. For example:

If I call this code:
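(using the hypothetical AppConfig and MyService from the sketch above)

public class DirectCallDemo {
    public static void main(String[] args) {
        AppConfig config = new AppConfig();
        MyService first = config.myService();
        MyService second = config.myService();
        System.out.println(first == second);   // false: plain Java, every call news up a fresh instance
    }
}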

Each invocation will return a new instance. However, if I pass this to Spring:
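(again a sketch; the exact JavaConfig-aware ApplicationContext class and the bean name are guesses)

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class SpringCallDemo {
    public static void main(String[] args) {
        ApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);
        MyService first = (MyService) context.getBean("myService");
        MyService second = (MyService) context.getBean("myService");
        System.out.println(first == second);   // true: Spring intercepts the @Bean method and caches the singleton
    }
}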

Each invocation will return the same instance. This doesn’t seem right to me. The code no longer behaves as standard Java code, which seems to be one of the big concepts behind JavaConfig. Instead, the behavior is supplemented by the annotations of JavaConfig and how Spring uses those to return instances.

I think overall that JavaConfig is a good step away from the generally complex and error-prone Spring configuration XML, but it doesn’t quite “do it” for me. It seems clunkier than it could be, and with obvious examples of simpler injection models such as Guice, I was hoping it would have been more streamlined.

Aug 13, 2007

Okay, I spent a number of hours battling with Guice last night and found a simple bug in the Guice code, which is probably more a bug in Sun’s JVM than anything else. I don’t blame the Guice guys for causing my pain and suffering, I fully blame Sun. Looking at the JavaDoc of the Properties class, where the issue exists, we see it was written by Arthur van Hoff, Michael McCloskey and Xueming Shen. So, I have little choice but to blame them.

Okay, first off, let’s discuss the problem. Properties implements Map. In fact, it extends Hashtable. The only problem is that it isn’t really a Map unless you have generics, and I mean true generics. Furthermore, since I’m not a fan of the generic Map, because it still lets Object in on the get, containsValue, containsKey and remove methods, even if Properties implemented this interface correctly it wouldn’t really be all that safe from runtime issues. Properties is really a hash for Strings only. It also defines a hierarchy of String key-value pairs wherein you can define default values using a tree-like structure. The only problem is that Map doesn’t really do that so well. Not to mention Map has no concept of default values at all.

EDIT: how about an example
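Something like this, with made-up keys; the behavior is the real point:

import java.util.Properties;

public class PropertiesGotcha {
    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("color", "blue");

        Properties props = new Properties(defaults);   // "color" lives in the defaults tree
        props.setProperty("size", "large");

        System.out.println(props.getProperty("color"));   // "blue" - getProperty walks the defaults
        System.out.println(props.containsKey("color"));    // false - Hashtable's methods do not
        System.out.println(props.entrySet());              // [size=large] - defaults are invisible here too
    }
}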

The issue I hit was that the Guice dudes were calling the entrySet method, which is defined on Map and implemented in Hashtable. Properties really should override EVERY method in Hashtable if it were to truly implement the Map interface as best it could. But guess what, it doesn’t even come close. So, if you use any of the methods not specifically defined on the Properties class, you are guaranteed NOT to get the correct results, because the result will not contain any of the defaults you might have set up. Defaults are only handled via composition, because the Properties object contains a pointer to another Properties object that contains the defaults. This means that it has to find properties using a lookup method that understands that Properties are defined in a tree-like structure.

So why doesn’t Sun just fix this f&*#ing thing (this post has been filtered by the Inversoft Profanity Filter BTW ;)? They can’t. Whoever wrote this originally pretty much ensured that this horrible abomination of a class could never be fixed. In order to ensure backwards serial compatibility, the default Properties must be a member variable. They can’t just go adding serialization methods or externalizable methods, because you can’t do that and keep it compatible. They can’t change the parent class or any member variables since that jacks up serialization and compile/runtime behavior. So, you are left with this horrible class that can never be fixed.

Okay, I’ve pretty much flamed Sun and these three engineers, who might not have actually written the original class and now I’d like to offer a suggestion to not look like I’m not constructive. Deprecate this class and write it correctly. Make the new one called PropertiesTree or something like that and implement it right this time. And a quick note to Sun: “Let me know if you need any help rewriting this.”

Aug 13, 2007

I made a comment in my JVM restart post about some JSRs that address server restarts. Here are the JSRs and how I see them solving some of the problems.

121: Application Isolation API

This JSR addresses the need to have mini-JVMs inside a single JVM that are completely isolated from each other. They can be stopped, started, restarted, etc. without impacting anything else running inside the overall JVM. This reduces the ramp-up time considerably and is similar to how some VMs manage processes rather than managing entire VMs. This one actually completed last year. I’m very interested to see if it gets rolled into things, because it could be extremely powerful.

277 and 294: Java Module APIs

This is similar to the OSGi support that was approved in JSR 291; however, it makes modifications to the JVM, which aren’t really addressed in 291 (from what I know, which is minimal). This JSR allows more freedom to control class-loading and the management of classes within boxed contexts. It probably does not address threading issues related to misbehaving classes inside a context, but together with JSR 121 this would be an excellent solution. Let’s hope Sun has the vision to see this as well.

284: Management APIs

This one comes from the folks at Google and addresses utilization and management of processing within a JVM. It specifically addresses the need to terminate misbehaving processes within the JVM.

291: OSGi

This one seemed like it was marshaled through the JCP just to battle 277 and 294. I think OSGi is great, but it doesn’t address many of the JVM level problems in very large, high availability applications. It really only addresses services and how they are separate but accessible. I’d much rather see 277 and 294 fully realize what the JVM should have in terms of modularization and separation of execution. If we could see truly separate execution spaces within a JVM that strongly define their public and private interfaces, we could gain a lot of stability without requiring what I believe to be dangerous restarts.