Making Solid Lights
I’m slightly surprised at how rarely anyone asks us how the lights in In The Dark work. I guess most people assume the lights are not much different from lights in any other game, but in fact they are very different. In most games lighting is handled more or less entirely on the graphics card, with little or no feedback to the rest of the game, and most techniques only need lighting to be roughly accurate. In The Dark, on the other hand, requires a high level of accuracy and feedback or the game would be unplayable. In fact, the lighting in In The Dark is handled almost entirely by the physics engine, because each light is a physical object within the game world.
Updating to the newest version of Box2D meant rewriting and optimizing the whole lighting process, but both the old and new versions share four main steps: collection, culling, sorting, and projection. I’m not going to go into a huge amount of detail, because most of it is very specific to In The Dark and wouldn’t apply to other games. The technique itself is not unlike stencil shadow volumes, which are sometimes used in 3D games to produce very sharp, clean-looking shadows.
In The Dark doesn’t actually look at sprites to determine anything; it relies solely on the physics bodies that represent them. First it uses the bounding area of the light to determine which objects are within range and adds the corners of those objects to a list. Then the edges of the light are traced to find any intersections, and those points are added to the list as well. This system does a poor job with very complicated shapes, so the majority of objects in In The Dark are kept simple enough to be represented as a collection of a few boxes or triangles. Every point is also tagged with the object it is attached to for future reference.
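The collection step can be sketched roughly like this. This is a minimal illustration, not the game’s actual code: I’m assuming each body has been simplified to a list of corner points, and `collect_points` and the light’s circular bounding radius are hypothetical names, not In The Dark’s real API.

```python
import math

def collect_points(light_pos, light_radius, polygons):
    """Gather (corner, owner_index) pairs for corners within the light's bounds."""
    points = []
    for owner, poly in enumerate(polygons):
        for corner in poly:
            dx = corner[0] - light_pos[0]
            dy = corner[1] - light_pos[1]
            if math.hypot(dx, dy) <= light_radius:
                # Tag each point with its owning object for the later steps.
                points.append((corner, owner))
    return points

# A small box near the light and another box far outside its range.
box = [(1.0, 1.0), (2.0, 1.0), (2.0, 2.0), (1.0, 2.0)]
far_box = [(50.0, 50.0), (51.0, 50.0), (51.0, 51.0), (50.0, 51.0)]
pts = collect_points((0.0, 0.0), 5.0, [box, far_box])
```

Only the near box’s four corners survive, each tagged with owner index `0`.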
The next step is culling the unneeded points, which is itself a two-step process. First, each point is checked to see whether it actually lies within the bounds of the light. Then raycasting is used to determine whether the remaining points have line of sight to the light source. This is relatively straightforward and leaves us with a list of only the points the light can actually reach.
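In the real game the line-of-sight test would go through Box2D’s raycasting, but the idea can be sketched with a plain 2D segment-intersection check; the function names here are illustrative.

```python
def segments_intersect(p1, p2, p3, p4):
    """Proper-crossing test for segments p1-p2 and p3-p4 (orientation method)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def has_line_of_sight(light_pos, point, edges):
    """True if no blocking edge crosses the segment from light to point."""
    return not any(segments_intersect(light_pos, point, a, b) for a, b in edges)

wall = [((2.0, -1.0), (2.0, 1.0))]  # a vertical wall segment
```

A point behind the wall fails the check; a point off to the side passes.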
After this, the points are sorted by their angle relative to the light source. Using the now-sorted points it is possible to generate lighting triangles by simply drawing from the light source to the first and second points, then the second and third, and so on. But there is a problem: suppose the third and fourth points are attached to different objects. This is where the fourth step comes in.
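The sort-and-fan idea looks roughly like this sketch (again illustrative names, not the game’s code). It builds a triangle between each pair of angle-adjacent points that belong to the same object, and leaves the gaps between different objects for the projection step.

```python
import math

def sort_by_angle(light_pos, tagged_points):
    """Sort (point, owner) pairs by their angle around the light source."""
    def angle(item):
        (x, y), _ = item
        return math.atan2(y - light_pos[1], x - light_pos[0])
    return sorted(tagged_points, key=angle)

def fan_triangles(light_pos, tagged_points):
    """Build (light, p_i, p_i+1) triangles, skipping gaps between objects."""
    ordered = sort_by_angle(light_pos, tagged_points)
    tris = []
    for (p1, o1), (p2, o2) in zip(ordered, ordered[1:]):
        if o1 == o2:  # same object: safe to span directly
            tris.append((light_pos, p1, p2))
        # different owners: this gap must be bridged by the projection step
    return tris

light = (0.0, 0.0)
tagged = [((1.0, 1.0), 0), ((1.0, 0.0), 0), ((0.0, 1.0), 1)]
tris = fan_triangles(light, tagged)
```

Here the two points on object 0 produce one triangle, and the pair straddling objects 0 and 1 is skipped.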
A projection is made by tracing the path from each of the two neighboring points all the way to the “bottom” edge of the light and finding the nearest intersection with an object. Because all other points have already been accounted for, both projected points must fall on the same face of a single object, meaning one triangle can always span the gap. This can occasionally get confused when objects overlap. For example, when a box hits a wall, the corner of the box will sometimes be inside the wall for a single frame, causing it to be culled. Special precautions must be taken to prevent flickering lights in chaotic scenes.
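The core of the projection step is extending a ray from the light through a point and finding the nearest edge it hits. A minimal sketch, assuming edges are given as point pairs and using a hypothetical `project_ray` helper:

```python
import math

def project_ray(light_pos, through, edges, max_dist=1000.0):
    """Extend a ray from the light through `through`; return the nearest hit."""
    lx, ly = light_pos
    dx, dy = through[0] - lx, through[1] - ly
    length = math.hypot(dx, dy)
    dx, dy = dx / length, dy / length
    best_t = max_dist
    best_hit = (lx + dx * max_dist, ly + dy * max_dist)
    for (ax, ay), (bx, by) in edges:
        ex, ey = bx - ax, by - ay
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-9:
            continue  # ray is parallel to this edge
        # Solve light + t*d = a + u*e for t (ray distance) and u (edge parameter).
        t = ((ax - lx) * ey - (ay - ly) * ex) / denom
        u = ((ax - lx) * dy - (ay - ly) * dx) / denom
        if 0.0 <= u <= 1.0 and 1e-6 < t < best_t:
            best_t, best_hit = t, (lx + dx * t, ly + dy * t)
    return best_hit

# Ray from the light through (1, 0) should land on a wall at x = 3.
hit = project_ray((0.0, 0.0), (1.0, 0.0), [((3.0, -1.0), (3.0, 1.0))])
```

Running this projection for each of the gap’s two neighboring points yields the far corners of the bridging triangle.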
At this point the video card at last gets involved, using shaders to turn the otherwise uninteresting triangles into something you would recognize as light. There are many small details I skipped over, mostly special-case checks and optimizations. It’s worth noting that this isn’t just calculating lighting: the triangles are actual physical objects in the world that exist only for the fraction of a second between frames before they are replaced with new ones. It’s pretty amazing how much can be calculated in a 60th of a second on a modern computer.