Artem Maksimenko, software engineer at C3D Labs, presents updates to the C3D Web Vision module and shares the product roadmap.
The C3D Web Vision component integrates 3D model viewing capabilities into web applications. The two latest major releases, 1.8 and 1.9, introduce a wide range of new features.
The measurement tool now includes an orthogonal dimension (Fig. 1): a linear dimension located on a plane that shows the distance between two parallel planes. Another addition is the chain dimension: a linear dimension consisting of an unlimited number of segments that shows their total length (Fig. 2). A third new tool measures the area of a region bounded by a closed contour connecting a set of points (Fig. 3).
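The math behind the chain dimension and the area measurement can be sketched as follows. This is an illustrative outline of the underlying geometry only, not the C3D Web Vision API:

```typescript
type Point2D = { x: number; y: number };

// Chain dimension: the total length of consecutive segments.
function chainLength(points: Point2D[]): number {
  let total = 0;
  for (let i = 1; i < points.length; i++) {
    total += Math.hypot(points[i].x - points[i - 1].x,
                        points[i].y - points[i - 1].y);
  }
  return total;
}

// Area of a region bounded by a closed contour (shoelace formula).
function contourArea(points: Point2D[]): number {
  let sum = 0;
  for (let i = 0; i < points.length; i++) {
    const a = points[i];
    const b = points[(i + 1) % points.length];
    sum += a.x * b.y - a.y * b.x;
  }
  return Math.abs(sum) / 2;
}
```

For example, a chain through (0, 0), (3, 0), (3, 4) has a total length of 7, and the unit square has an area of 1.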
The cross-section functionality has been significantly upgraded.
The cutting plane tool is now interactive: the user can manipulate the plane using handles. Cross-section planes can be moved along the Z axis and rotated about the X and Y axes (Fig. 4). The handles can also be repositioned within the cross-section plane.
A brand-new tool is also worth noting: the cutting cube. In essence, it combines six intersecting cross-section planes (Fig. 5). The cube has a complete set of handles for:
- linear movement along the three main axes X, Y, Z and the three main planes
- scaling along these axes
- rotation about these axes.
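Conceptually, a cutting cube is six half-spaces, and a point of the model remains visible only if it satisfies all six plane inequalities. A minimal sketch of that idea (names and conventions are illustrative, not the actual API):

```typescript
type Vec3 = [number, number, number];
// Keep points p with normal · p + d <= 0.
type Plane = { normal: Vec3; d: number };

function insideCuttingCube(p: Vec3, planes: Plane[]): boolean {
  return planes.every(pl =>
    pl.normal[0] * p[0] + pl.normal[1] * p[1] + pl.normal[2] * p[2] + pl.d <= 0
  );
}

// The axis-aligned cube [-1, 1]^3 expressed as six cross-section planes.
const unitCube: Plane[] = [
  { normal: [ 1, 0, 0], d: -1 }, { normal: [-1, 0, 0], d: -1 },
  { normal: [ 0, 1, 0], d: -1 }, { normal: [ 0, -1, 0], d: -1 },
  { normal: [ 0, 0, 1], d: -1 }, { normal: [ 0, 0, -1], d: -1 },
];
```

Moving, scaling, or rotating the cube then amounts to transforming these six planes together.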
With these tools, the user can conveniently view truncated models in 3D.
There are also new features in the camera orientation symbol. Previously, it was a standard symbol with three arrows. Now there is an alternative representation: the view orientation cube (Fig. 6). Unlike the three arrows, it is interactive: hovering the mouse pointer over cube elements highlights them in blue, and clicking moves the camera to view the selected plane.
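Snapping the camera to a clicked cube face reduces to placing it along the face normal at a fixed distance from the target point. A minimal sketch, with an assumed function name:

```typescript
// Position the camera for an axis-aligned view of the clicked face.
function cameraForFace(
  target: [number, number, number],
  faceNormal: [number, number, number],
  distance: number
): [number, number, number] {
  return [
    target[0] + faceNormal[0] * distance,
    target[1] + faceNormal[1] * distance,
    target[2] + faceNormal[2] * distance,
  ];
}
```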
The camera now has two new functions. The first is the Flythrough feature: you can now “fly” freely around the scene and around objects (Fig. 7). The second is First-Person View (FPV): the user navigates the scene just like in a computer game (Fig. 8).
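First-person navigation boils down to moving the camera along its forward and strafe directions, both derived from the yaw angle. An illustrative sketch of that step (not the Web Vision camera API):

```typescript
function fpvStep(
  pos: [number, number, number],
  yaw: number,      // rotation about the vertical axis, in radians
  forward: number,  // e.g. W/S keys
  strafe: number    // e.g. A/D keys
): [number, number, number] {
  const fx = Math.sin(yaw);
  const fz = Math.cos(yaw);
  // Forward direction is (fx, 0, fz); the perpendicular strafe
  // direction in the horizontal plane is (fz, 0, -fx).
  return [
    pos[0] + fx * forward + fz * strafe,
    pos[1],
    pos[2] + fz * forward - fx * strafe,
  ];
}
```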
There is a new annotation tool: the Point of Interest (POI). A POI marks a user-defined point in the scene where a PNG image is displayed. The image's on-screen size does not depend on the scene scale, so it does not change while zooming, and its plane always stays parallel to the screen.
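The screen-constant POI size can be achieved by scaling the billboard with camera distance: the world-space size that covers a fixed number of pixels grows linearly with distance. A sketch under a perspective-camera assumption (names are illustrative):

```typescript
function billboardScale(
  distanceToCamera: number,
  fovY: number,             // vertical field of view, in radians
  viewportHeightPx: number,
  desiredHeightPx: number
): number {
  // World-space height of the whole viewport at this distance.
  const worldViewportHeight = 2 * distanceToCamera * Math.tan(fovY / 2);
  // Fraction of the viewport the image should occupy, in world units.
  return (desiredHeightPx / viewportHeightPx) * worldViewportHeight;
}
```

Recomputing this scale every frame keeps the image the same apparent size regardless of how far the camera is from the POI.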
The animation (Fig. 9) shows how to use the POI. When the “red pencil” annotation is not edited, it is hidden. Instead, a POI object (an M-like symbol) is displayed to show the red pencil position.
Another new annotation object is the image (Fig. 10). Unlike a POI, it is not linked to the camera, so it is not scaled or rotated automatically as the camera moves. Instead, the user can specify the image itself, its dimensions, position, and 3D rotation angles, and translate and rotate it manually.
Scene background images are now supported (Fig. 11). In earlier versions, only the background color (solid or gradient) could be specified; now the user can add a PNG image as a background. There are several background fill options: tile, fit, fill, crop, leave unchanged, and pan. The Pan option is particularly convenient for panoramic images taken with a 360° camera. Such images are "wrapped" around the scene, creating a sense of presence.
These are just a fraction of the new features; there are many more.
Now let's discuss the future of C3D Web Vision. Since the first public release, we have received extensive user feedback. Users have diverse needs: some require standard features, such as visualizing 3D models in a browser, while others request unique functionality.
We kept asking ourselves: what do our users want? The answer: browser-based geometry visualization, support for third-party formats (both proprietary and C3D), viewing attributes (important to the user even when irrelevant to us), creating projections, collision detection, use of our GUI components, and so on. All of this can be summarized in a single overarching request: users want to minimize their effort (Fig. 12).
We analyzed these needs and realized that visualization alone is not enough: users want a more flexible, comprehensive architecture. It is still a concept under discussion, but development has already begun.
C3D Web Vision used to include the components shown in the white frame (Fig. 13): the visualization library and client as the frontend, and the visualization service as the backend. Now the structure has changed.
Our modules are shown in blue in Fig. 13: the frontend on the left, consisting mostly of JavaScript libraries, and the backend on the right, consisting of microservices. Overall, it is a microservice architecture: each component operates independently and can be used separately. User components are shown in orange, for reference only, to illustrate how they interact with our architecture.
Let's learn more about the components. The component names represent their functions and may not be final.
We’ll begin with visualization. We used to have a single microservice (Web Graphics in the diagram) responsible for both graphics processing and reading various formats through our converters. Now we have divided it into dedicated services. There is a separate file import microservice, while the graphics service focuses solely on preparing graphics for visualization. Web Graphics Client JS is a frontend visualization client (a JavaScript library). It requests graphic data from the service and exposes it to the JavaScript environment.
We are making a dedicated visualization library (Web Renderer JS in the diagram). Initially, it was based on the source code of the C3D Vision desktop visualizer, but over time, as we developed it and analyzed user feedback, it has evolved, although its foundation remains the same.
One of the key C3D Web Vision components is the Web Converter service. We already mentioned it briefly above. Initially, it was embedded into the graphics service. As Web Vision developed, it became clear that users needed a wider range of data read from the model file than just geometry. This prompted us to create a separate service that extracts four different types of data from geometry files.
The first type, of course, is the geometry representation in the C3D format, which was already available for visualization in Web Vision. The second type is attributes in the JSON format, imported “as is”. The third is the model tree in the JSON format: although the tree is part of the C3D file, recent optimizations have made it unsuitable for direct access. Finally, for those who prefer third-party visualization tools, we extract graphics as a triangle mesh. We are still discussing how to transfer meshes between the modules.
All these four data types are linked with unique identifiers (UUIDs) for easier sharing.
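One way to picture that linkage is a record keyed by the conversion's UUID. This is a hypothetical sketch; the field names and key layout are assumptions, not the actual Web Converter output:

```typescript
interface ConversionResult {
  id: string;             // UUID shared by all four artifacts
  geometryC3d: string;    // geometry in the C3D format
  attributesJson: string; // attributes, imported as is
  modelTreeJson: string;  // model tree extracted from the C3D file
  meshKey: string;        // triangle mesh for third-party viewers
}

function linkArtifacts(uuid: string): ConversionResult {
  return {
    id: uuid,
    geometryC3d: `${uuid}/geometry.c3d`,
    attributesJson: `${uuid}/attributes.json`,
    modelTreeJson: `${uuid}/tree.json`,
    meshKey: `${uuid}/mesh.bin`,
  };
}
```

With a shared identifier, any consuming service can fetch just the artifact it needs without re-running the conversion.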
Another service (Web Collision Detection in the diagram) performs collision detection (Fig. 15). Internally, it accesses the C3D Collision Detection desktop module. Essentially, it provides a wrapper to use the module in the web environment. The service is already operational, but still needs testing and debugging.
We have also developed a special application with numerous examples for demonstration, testing, and debugging. It shows how to use the visualization library API and displays the code execution result (rendering). We are going to make this application public.
As of today, the graphics service (Web Graphics) is already available to users. In Q2, we focused on the file converter (Web Converter). In Q4, we’ll be enhancing the collision detection service (Web Collision Detection). In Q3 2026, we will work on the visualization library (Web Renderer JS), create a more user-friendly API, test, and optimize the solution. It is a considerable amount of work (Fig. 16).
Currently, we are intensively working on a new release of Web Vision 2.0. The key improvement of the new release is enabling quick, smooth, and convenient loading of large models into the browser. We are talking about models several gigabytes in size, which places extremely high demands on performance.
Fig. 17 shows a model of the Nizhny Novgorod Fair with many buildings. These are not just bounding boxes, but full-fledged models with all the interiors. Web Vision uses dynamic loading: simplified geometric shapes are displayed at a distance, and as you zoom in, the level of detail increases for the objects in the camera's field of view. This technology is the key innovation in Web Vision 2.0. Another significant improvement is the switch to WebGL 2.0 to support modern browsers. We have also implemented many other optimizations.
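At its core, distance-based dynamic loading means picking a level of detail from the camera distance. A minimal sketch; the threshold values are illustrative tuning parameters, not the actual Web Vision settings:

```typescript
function lodForDistance(
  distance: number,
  thresholds: number[] = [10, 50, 200]
): number {
  let lod = 0; // 0 = full detail; higher values = coarser geometry
  for (const t of thresholds) {
    if (distance > t) lod++;
  }
  return lod;
}
```

As the camera zooms in, the distance drops below successive thresholds and finer geometry is streamed in for the visible objects.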
As to rendering, we will add shadows and color normalization to improve visualization quality. When models are converted, colors often become distorted, appearing either too dull or excessively bright. New procedures will visually balance the color palette for a neat, professional look (Fig. 18). We will also display the contours of the selected geometry, so the contour will be visible throughout the entire model.
We are also going to add support for WebGPU, a promising standard for 3D hardware acceleration in browsers. It will give us better performance and improved visualization in the web environment.
As to the services, the priority is still their scalability and fault tolerance. The services will be able to run in parallel: if one of them fails, the other continues to operate, and once the failed service is restored, it resumes its work from the last processed point.
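The resume-after-failure behavior can be pictured as checkpointed processing: the service durably records the index of the last completed item and, after a restart, continues from there. Names here are illustrative, not the actual service API:

```typescript
function processFromCheckpoint(
  items: string[],
  checkpoint: number,              // index of the first unprocessed item
  work: (item: string) => void
): number {
  for (let i = checkpoint; i < items.length; i++) {
    work(items[i]);
    checkpoint = i + 1; // a real service would persist this durably
  }
  return checkpoint;
}
```

A restarted instance passed the persisted checkpoint skips everything already processed, so no work is repeated and none is lost.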
We will also introduce textures, support for PMI attributes, viewing of general notes, and importing multiple versions of KOMPAS-3D models.
The geometry functionality will support animation: the user will be able to set key points and change geometry parameters in real time. Snapping will work with both mesh vertices and edges.
These functions will be implemented between Q3 2025 and Q1 2026, followed by refinement, testing, and release.

Artem Maksimenko,
Software Engineer,
C3D Labs
