I wasted hours just trying to get my code to run the example in a reasonable time, and when I gave up I discovered that the real input doesn’t take much longer to run than the example.

So there’s a hint that doesn’t require a spoiler: don’t bother trying to get your program to work on the example in seconds; the input isn’t that much harder! Just try to get the program to work on the example, period!

Unfortunately, that’s not enough. The example doesn’t really illuminate the problem. Your code can run correctly on the example and still run incorrectly on the problem input. This happens from time to time, but on a problem like today’s it’s ruinous.

To help me debug my program, I downloaded and ran several people’s solutions posted online (in other languages). I haven’t adopted any of their approaches, but there’s a morbid satisfaction in the fact that different “solutions” produce different results on my input.

Some authors admit that they guessed at optimizations and got lucky; the optimizations probably don’t work in general. I’ve tried a few, and nope, they don’t work on my input (unless I’m implementing them wrong).

Some authors used parallelism. I may yet try that, since the problem is parallelizable, and otherwise I have to wait 5-10 minutes per attempt.

In short, I haven’t finished it yet. IMHO the puzzle is terribly flawed, for the reasons given above. And, yeah, sour grapes, too.

For every combination C of robots, keep track of the amounts A of minerals they’ve obtained. Don’t add any new state with combination C where every mineral’s amount is no greater than what’s already been stored for the same C.
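A minimal sketch of that pruning rule, in Python for illustration (the thread’s solutions are in Ada); names like `make_pruner` and `best_seen` are mine, and since resource tuples aren’t totally ordered, this version keeps a small frontier of non-dominated amounts per robot combination rather than a single entry:

```python
# Dominance pruning: for each robot combination, remember the best
# resource amounts seen so far; skip any state that is componentwise
# dominated by a stored state with the same robots.

def make_pruner():
    best_seen = {}  # robot combination -> list of non-dominated resource tuples

    def should_keep(robots, resources):
        """robots and resources are tuples, e.g. (ore, clay, obsidian, geode)."""
        stored = best_seen.setdefault(robots, [])
        # Dominated: some stored state has >= resources in every category.
        if any(all(s >= r for s, r in zip(old, resources)) for old in stored):
            return False
        # Keep the new state and drop stored entries it now dominates.
        stored[:] = [old for old in stored
                     if not all(r >= s for s, r in zip(old, resources))]
        stored.append(resources)
        return True

    return should_keep
```

In a BFS/DFS loop you would call `should_keep(robots, resources)` on each candidate state and only enqueue it when the call returns True.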

In retrospect, I feel kind of embarrassed that I didn’t think of this sooner.

Will post my solution later, as I’m late for work.

I could reuse tricks that I used in previous days, which made this relatively smooth.

I added a lot of optimizations by limiting the ranges of the count types, but failed to come up with an efficient Hash function for the Hashed_Maps. So I just used Unbounded_String.Hash with a string representation of the state. That’s something I’m not proud of.

Wasted a lot of time on a bug in the buying algorithm that left resources unconsumed, so my output was higher than the given example’s.

Run time is about 18s for Part 1, and 73s for both parts, so about 55s for Part 2.

I had a lot of false starts before I settled on something that works. Part A runs in 7 seconds, but when I add in part B it takes about six and a half minutes to run. My strategy is the same for both parts, I only vary the number of minutes it runs.

I finally settled on the idea of coming up with all the permutations of adding new robots. That is, one permutation might just be 10 or 15 Ore robots, basically however many you could add until time runs out. As I generate a permutation, each time I try to add one of the four types of robots, I try to add it at the first place it could be added. Obviously, some permutations will only contribute a few robots, if any (e.g. any permutation that starts with geode or obsidian will immediately fail). So basically, try each of the four robots, find the earliest place it can be placed, if it can at all. If it can, then try all four again, this time starting just after the minute where the first one was placed, and repeat. I keep track of the resource numbers so any time I try at a particular minute, I know how much has accumulated up to that point. Interestingly, I don’t have any containers; the recursive calls essentially contain the structure.
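The “try each robot at the earliest minute it can be built, then recurse” scheme can be sketched as follows, in Python for brevity (the original is in Ada); the blueprint format and function names are my own, and this omits the poster’s other optimizations:

```python
# Earliest-placement DFS: each recursive call tries to schedule one more
# robot of each type at the first minute it becomes affordable, then
# recurses from just after that minute. No containers are needed; the
# recursion itself carries the state.

# Blueprint: one cost tuple per robot kind, in resource order
# (ore, clay, obsidian, geode).

def best_geodes(blueprint, time_left, robots=(1, 0, 0, 0), stock=(0, 0, 0, 0)):
    # Baseline: build nothing more and let current robots run out the clock.
    best = stock[3] + robots[3] * time_left
    for kind, cost in enumerate(blueprint):
        # How many minutes until we can afford this robot?
        wait = 0
        feasible = True
        for r in range(4):
            need = cost[r] - stock[r]
            if need > 0:
                if robots[r] == 0:
                    feasible = False          # can never afford it on this path
                    break
                wait = max(wait, -(-need // robots[r]))  # ceiling division
        if not feasible or wait + 1 >= time_left:
            continue                          # robot would never produce anything
        # Advance to the minute after the robot is finished.
        new_stock = tuple(stock[r] + robots[r] * (wait + 1) - cost[r]
                          for r in range(4))
        new_robots = tuple(robots[r] + (1 if r == kind else 0) for r in range(4))
        best = max(best,
                   best_geodes(blueprint, time_left - wait - 1,
                               new_robots, new_stock))
    return best
```

Because each robot is always placed at its earliest feasible minute, idle minutes are handled implicitly: “waiting to build a different robot” is just a branch where a cheaper robot is skipped in the sequence.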

It was interesting to look at @rommudoh’s and @wutka’s solutions.

Yup. Same here.

For what it’s worth, I reasoned about the maximum likely number of robots and resources, and based my hash function on that. I was even able to get away with using relatively small prime numbers to build the hash. It seemed to work. While this may vary by approach, at one point I noticed a non-trivial improvement in performance from clearing the “hashed cache” of previously explored states every time I bumped up a minute, the reasoning being that states from the previous minute would not affect the future. At the time, I was getting a queue of multi-million choices well before termination, so the hash would fill, and I was probably paying for clashes. Perhaps this would be better than caching a string representation?
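The prime-based hash might look something like this (a Python sketch; the actual bounds and prime choices in the original Ada code aren’t given, so these are illustrative):

```python
# State hash built from small primes: treat the 4 robot counts and
# 4 resource amounts as digits in a mixed-radix number. If the counts
# stay within the assumed bounds (say, well under the primes' product
# growth), distinct states rarely clash.

PRIMES = (7, 11, 13, 17, 19, 23, 29, 31)

def state_hash(robots, resources):
    """Combine (ore, clay, obsidian, geode) robot counts and resource
    amounts into a single integer."""
    h = 0
    for value, prime in zip(robots + resources, PRIMES):
        h = h * prime + value
    return h
```

Compared with hashing a string representation of the state, this avoids building and hashing a temporary string on every lookup, which matters when the frontier holds millions of states.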

Either way, the optimization I mention above was far more effective, bringing both parts down to mere seconds.

Thinking about it now, my assumption that you can clear the hash of previous states is actually the opposite of clever, as you can arrive at a certain state of robots earlier in some paths than in others. At the time, I was including the amount of time that had passed in my state, so it was technically correct to proceed as I was doing, though it just shows how bad my caching and optimization were at the time.

When I first read your description, I understood it to mean you were trying a depth-first search, and sure enough a skim of your code reveals recursion, so does that sound like an accurate description, or have I missed something? I didn’t study it carefully.

Yes, that is an accurate description. I had tried several variations of DFS; the reason looking at it as sequences of permutations worked for me is that my earlier attempts needed to consider situations where the search might sit idle at points while trying various combinations of robots. If I didn’t allow for that, I would preclude the possibility of waiting a few more minutes to build a different robot. By doing it as a sequence of permutations, I have the rule that for each robot in the sequence, I only place it at the first place it can occur, and then go to the next robot in the sequence. Because I am trying all possible creation sequences, but creating each robot at the earliest possibility, I find the best possible score for each permutation.

I went back and tried doing this one with tasks, just to get familiar with tasks. I basically split the permutation creation into two parts: one that creates permutations of 6 items, and another that starts from a permutation of 6 and tries all the remaining permutations beginning with that 6. Then I created two protected objects, one to return the next permutation of 6 items, and another to receive a score and keep track of the maximum score. Each task just loops, calling the first protected object to get its next starting point, using the old recursive routine to do the rest of the computation, and then registering its score with the other protected object. Once all the tasks complete, I just get the max score from the protected object. I get roughly a 6x speedup on an 8-core/16 hyperthread machine, and maybe 7x on a 16-core machine. My guess is that there are particular permutations that take a lot longer to try, so I am not spreading the longest-running ones over multiple processors.
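The structure of that scheme, workers pulling prefixes from one protected object and reporting scores to another, can be mirrored in Python roughly like this (hypothetical names; `score_fn` stands in for the recursive search seeded with a fixed prefix). Note that Ada tasks map to real OS threads, whereas in CPython a thread pool like this only shows the shape of the design; CPU-bound speedup would need a process pool instead:

```python
# Skeleton of the two-protected-object scheme: a lock-guarded iterator
# hands out 6-robot prefixes, and a lock-guarded maximum accumulates
# scores reported by the workers.
import threading
from itertools import product

def parallel_best(score_fn, prefix_len=6, workers=8):
    prefixes = product(range(4), repeat=prefix_len)  # 4 robot kinds
    lock = threading.Lock()   # plays the role of both protected objects
    best = [0]

    def worker():
        while True:
            with lock:                         # "next permutation" handout
                prefix = next(prefixes, None)
            if prefix is None:
                return                         # no more starting points
            s = score_fn(prefix)               # search the rest of the tree
            with lock:                         # max-score accumulator
                best[0] = max(best[0], s)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return best[0]
```

Pulling one prefix at a time from a shared handout, rather than pre-splitting the work evenly, is also the natural fix for the load-imbalance issue mentioned above: slow prefixes simply occupy one worker while the others keep draining the queue.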

It would be interesting to try to do one with CUDA, but I think I need GNAT LLVM for that and I’m not sure how to get that going.