We would like to enhance our current 360° viewer with "image in motion" and 3D WebGL features, creating a three-dimensional image viewer from 2D source images. Our images are spaced roughly 20 m apart, with less than 10° of differentiation between adjacent captures, and we have hundreds of thousands of 360° images in sequential order. Using a WebGL plugin recommended by the developer (you), these images can be merged to create a 3D appearance. We would like to merge the images into a geo-referenced 3D spatial viewer, with WebGL as the engine that builds the 3D space. Our current viewer supports only 2D 360° imagery from individual images.
We also expect the algorithm to enhance and extrude objects based on planes or perspective cues that appear consistently across the 360° 2D source images.
All objects/images can be loaded via API; we will provide API data upon project approval.
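To illustrate how the geo-referenced viewer might select which panorama to display, here is a minimal sketch in JavaScript. It assumes each API record carries `{ id, lat, lon }` fields; the field names and record shape are illustrative only, since the real API schema will be provided after project approval.

```javascript
// Haversine distance in metres between two lat/lon points.
function haversine(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Given the viewer's position, return the closest 360° image record.
// Linear scan for clarity; at hundreds of thousands of images a spatial
// index (grid, k-d tree) would replace this loop.
function nearestImage(images, lat, lon) {
  let best = null;
  let bestDist = Infinity;
  for (const img of images) {
    const d = haversine(lat, lon, img.lat, img.lon);
    if (d < bestDist) {
      bestDist = d;
      best = img;
    }
  }
  return best;
}
```

With captures roughly 20 m apart, this lookup is what decides when the viewer transitions from one panorama to the next as the user moves through the 3D space.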
1. We require examples of similar "image synth" or 3D work in a browser environment
2. WebGL experience
3. Able to stay on task and be flexible
In addition, we would like to apply image morphing to the existing viewer, merging the 2D images into one long image that is dynamically loaded from the API.
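As a sketch of the dynamic-loading side of this, the function below computes which image tiles of the long merged strip should be fetched for the current viewport. The tile width, prefetch margin, and the idea of one API request per tile are all assumptions for illustration; the real values depend on the imagery platform.

```javascript
// Return the tile indices visible in [scrollX, scrollX + viewportWidth),
// plus `prefetch` extra tiles on each side so panning stays smooth.
// Each index would map to one API request for that tile.
function visibleTiles(scrollX, viewportWidth, tileWidth, totalTiles, prefetch = 2) {
  const first = Math.max(0, Math.floor(scrollX / tileWidth) - prefetch);
  const last = Math.min(
    totalTiles - 1,
    Math.floor((scrollX + viewportWidth - 1) / tileWidth) + prefetch
  );
  const indices = [];
  for (let i = first; i <= last; i++) indices.push(i);
  return indices;
}
```

Tiles that fall outside this window can be evicted from memory, which keeps the viewer responsive even though the full sequence contains hundreds of thousands of images.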
Ref: [url removed, login to view]
Structure from motion using 2D imagery
[url removed, login to view]
We would like to incorporate one of these plugins into our imagery platform.