Wow, that is tantalising.
I remember thinking years ago when I first heard of SE... "they should just wait a couple of years for photogrammetry to do 95% of this automatically". I'm not sure I still feel the same way - there are a lot of things that wouldn't be practical to do automatically, and as soon as you want to animate things you will need to get in there and do things by hand and eye, with human judgement, anyway. Also, since then, the SE team has started to use the triangulation/camera-matching approach, which is kind of a manual version of photogrammetry.
However, I think the PhotoScan experiment shows one thing: how critically important texture, color and light are to capturing the atmosphere of the original images. Despite the crude and glitchy geometry, it still feels exactly like the schoolhouse.
I definitely think you guys should look into how you can "lift" as many textures as possible from the original shots, and if possible just add high-frequency detailing on top to make them work in HD. I'm not sure how feasible that is considering how lighting etc. affects the textures, and maybe you're already doing it where it's practical. But at the very least, I think you should do a serious color calibration pass when you get further with texturing and lighting, and adjust the base colors so the shaded on-screen colors match the originals as closely as possible.
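To sketch what I mean by a calibration pass: if you sample matching color patches from the original footage and from the new renders, you can fit a simple 3x3 linear transform (plain least squares) that pulls the rendered colors back toward the originals. The sample values below are entirely made up for illustration, and a real pass would obviously want more samples and probably a nonlinear model, but the idea is this simple:

```python
# Fit a 3x3 color transform so rendered colors approximate the originals.
# All sample values here are hypothetical, just to show the mechanics.
import numpy as np

def fit_color_transform(rendered, original):
    """Solve M minimizing ||rendered @ M - original|| (both N x 3 RGB)."""
    M, *_ = np.linalg.lstsq(rendered, original, rcond=None)
    return M

def apply_transform(colors, M):
    """Apply the fitted transform and clamp back into valid RGB range."""
    return np.clip(colors @ M, 0.0, 1.0)

# Hypothetical matched samples: average RGB of the same surface patch
# in a new render vs. the corresponding original frame.
rendered = np.array([[0.80, 0.70, 0.60],
                     [0.30, 0.35, 0.40],
                     [0.55, 0.50, 0.45],
                     [0.20, 0.60, 0.30]])
original = np.array([[0.75, 0.68, 0.55],
                     [0.28, 0.33, 0.42],
                     [0.52, 0.49, 0.44],
                     [0.22, 0.58, 0.33]])

M = fit_color_transform(rendered, original)
corrected = apply_transform(rendered, M)
```

The same `M` could then be baked into the base texture colors, so the correction costs nothing at runtime.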