I'm having trouble understanding the description for the last step for soft shadows on the website.
By the time we get to the last step, we have point_on_light (only for area lights) and we have intensity_at, but no general function for sampling a light. The provided test case does not give a jitter sequence, so I assume point_on_light doesn't apply here. What exactly am I supposed to be sampling? How is a sample generated? Should I just iterate over the center point of each cell?
Hello garfieldnate, are you talking about the test named "lighting() samples the area light"? If so, there's some pseudocode immediately following the test, which suggests an implementation. Let me know if I've misunderstood your question.
Hi Jamis, thanks for responding. The test and pseudocode you referred to are the same ones I'm talking about. I do not know how to make this test pass, but my renders seem okay, anyway, so I've been working on the other bonus chapters. Here's my glamor shot for the soft shadows chapter:
I can't tell the difference between this and the picture provided on the website. I've also found two other RTC implementations on GitHub that completed this bonus chapter, but both of them were also missing this test, so it may be a common point of confusion.
Like I said before, at this point in the chapter there is no method for sampling a light. The pseudocode provided does not give a jitter sequence, and the method of sampling is unclear.
Looking at it again right now, I think the intent is to use the original, non-jittered sampling, taking each sample from the center of its cell on the light. This means going back and refactoring a bit, which was unexpected (in general the book does a great job of warning the reader about required refactoring): the sample iteration inside intensity_at needs to be factored out for re-use, and point_on_light needs to be able to return points both with and without jitter.
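Here's a rough sketch of what I mean. This is not the book's code, just an assumed shape: the class name, constructor parameters, and the sample_points helper are all mine; only point_on_light and the cell-center offset of 0.5 come from the book's pseudocode.

```python
import random

class AreaLight:
    """Hypothetical area light; uvec/vvec span the light's surface."""

    def __init__(self, corner, uvec, usteps, vvec, vsteps, jitter=None):
        self.corner = corner                      # one corner of the light
        self.uvec = [c / usteps for c in uvec]    # edge of a single cell (u)
        self.usteps = usteps
        self.vvec = [c / vsteps for c in vvec]    # edge of a single cell (v)
        self.vsteps = vsteps
        self.samples = usteps * vsteps
        self.jitter = jitter                      # None => non-jittered sampling

    def point_on_light(self, u, v):
        # Without a jitter sequence, offset by 0.5 so each sample lands
        # in the middle of its cell; with one, offset by the next jitter value.
        du = self.jitter() if self.jitter else 0.5
        dv = self.jitter() if self.jitter else 0.5
        return [self.corner[i]
                + self.uvec[i] * (u + du)
                + self.vvec[i] * (v + dv)
                for i in range(3)]

    def sample_points(self):
        # The iteration factored out of intensity_at, so lighting() can
        # walk the same samples when shading.
        for v in range(self.vsteps):
            for u in range(self.usteps):
                yield self.point_on_light(u, v)
```

With jitter=None, a 2x2 light over the unit square yields the four cell centers (0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75); passing jitter=random.random instead gives the jittered behavior from earlier in the chapter.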
Since I can't tell the difference between our glamor shots, and since implementing this could make rendering much slower, do you happen to have any comparison images rendered with and without this sampling? Even just looking at the test, the difference is extremely small. Colors are discretized to 256 levels; 1 - (1/256) is 0.99609, while the expected intensity in the test case is 0.9965, so the difference is not quite a single color step (though the actual on-screen result will depend on rounding).
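To make that arithmetic concrete, here's the comparison spelled out (0.9965 is the expected intensity from the test; 1/256 is one step of the discretization as I described it above):

```python
full = 1.0          # full intensity
expected = 0.9965   # expected intensity from the test case
step = 1 / 256      # one discretization step with 256 color levels

# How many color steps the intensity difference amounts to, before rounding.
diff_in_steps = (full - expected) / step
print(diff_in_steps)  # 0.896 -- less than one full step
```

So before rounding the two intensities are under one color level apart, which is why I'd expect the rendered images to be nearly indistinguishable.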