02.05.2023 • C3D Vision

The C3D Vision Engine Manages 3D Texture Rendering on the Fly

The functionality of the C3D Vision visualization engine has been expanded: you can now create and render 3D textures. In this post, we'll talk about the Vision API objects that handle textures and show some demo code.

What's new in the Vision API?

The new functionality supports 3D textures and controls how they are rendered. For this purpose, two new Vision API classes have been introduced: Texture3D and RenderPostProcess. Let's take a closer look at them.

Texture3D is a standard C++ class for handling 3D textures. To work with a texture, Vision creates an object of this class and initializes it with the texture data, which can come from a texture bitmap or an image file. The class gives access to the texture width, height, and depth, can check for an alpha channel, and can set up rasterization filters and the texture repeat mode.

class Texture3D
{
public:
    /// \ru Конструктор по умолчанию. \en Default constructor. \~
    Texture3D();
    /// \ru Конструктор копирования. \en Copy constructor. \~
    Texture3D(const Texture3D& other) = delete;
    /// \ru Деструктор. \en Destructor. \~
    virtual ~Texture3D();

    <...>
};
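
Putting these pieces together, typical usage looks roughly like the sketch below. It is a minimal sketch based on the demo described later in this post: the Init, SetFilters, and SetWrap calls are taken from that demo, while sizeX, sizeY, and texData are assumed to have been prepared beforehand.

Texture3D* texture3D = new Texture3D();
// Pass the texture format, the slice size, and the per-slice data prepared earlier.
texture3D->Init(TextureFormat::BGRA, SizeI(sizeX, sizeY), texData);
// Rasterization filters.
texture3D->SetFilters(TextureFilter::Linear, TextureFilter::Linear);
// Repeat mode along each of the three texture axes.
texture3D->SetWrap(TextureWrap::NoRepeate, TextureWrap::NoRepeate, TextureWrap::NoRepeate);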

RenderPostProcess is also a standard class, used for the final post-processing of the scene. The input parameter for creating an object of this class is a string stream containing the code to be executed by the GPU, i.e. the shader. A RenderPostProcess object runs that code after the scene has been rendered and tweaks the texture as it is drawn, which lets the user control texture rendering on the fly. In addition, the RenderPostProcess class has methods that add matrix and texture objects under string names; the shader uses those names to access the objects during texture rendering.

class RenderPostProcess
{
public:
    /// \ru Конструктор. \en Constructor. \~
    explicit RenderPostProcess(std::istream& pixelShader);
    /// \ru Конструктор. \en Constructor. \~
    explicit RenderPostProcess(std::istream& vertexShader, std::istream& pixelShader);
    /// \ru Деструктор. \en Destructor. \~
    ~RenderPostProcess();

    <...>
};
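
The key convention is that the names passed to AddTexture and AddMatrix must match the uniform names declared in the shader source. The sketch below illustrates this link; it is a simplified, hypothetical fragment (tex3d and transform are assumed to exist already), not a complete program.

// The uniform names in the shader ("ValueTexture", "WorldToTexture") must match
// the names passed to AddTexture and AddMatrix below.
std::stringstream pixelShader(
    "#version 110\n"
    "uniform sampler3D ValueTexture;"
    "uniform mat4 WorldToTexture;"
    "void main(void)"
    "{"
    "    gl_FragColor = texture3D(ValueTexture, (WorldToTexture * vec4(0.0, 0.0, 0.0, 1.0)).xyz);"
    "}");

RenderPostProcess* pPostProcess = new RenderPostProcess(pixelShader);
pPostProcess->AddTexture("ValueTexture", tex3d);       // a Texture3D* prepared elsewhere
pPostProcess->AddMatrix("WorldToTexture", transform);  // world-to-texture transform matrix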

New Classes: Example

The 10_Texture3D project is included in the C3D Vision distribution kit. It demonstrates how to use 3D textures in a test application that visualizes the thermal radiation distribution map of a motherboard heating up inside an enclosure. Let's take a closer look.

First, open a model and add it to the scene. The texture will be rendered upon the model.

[Figure: The model]

Next, we create the std::stringstream pixelShader string stream. The GLSL shader definition, stored in the shaderSrc global variable, is passed to the stream.

std::stringstream pixelShader(shaderSrc);
RenderPostProcess* pPostProcess = new RenderPostProcess(pixelShader);

The variable can hold any shader source. In the demo it is defined as follows:

static const char shaderSrc[] =
"#version 110\n"

"varying vec2 VSN_TEXCOORD;"

"uniform sampler2D VSN_SCENE_COLOR_TEXTURE;"
"uniform sampler2D VSN_SCENE_DEPTH_TEXTURE;"
"uniform mat4 VSN_INV_VIEW_PROJECTION;"

"uniform sampler3D ValueTexture;"
"uniform sampler1D CoverTexture;"

"uniform mat4 WorldToTexture;"

"vec3 ToWorld(vec2 texCoord, float depth)"
"{"
"    vec3 screenPos = vec3(texCoord.xy, depth);"
"    <...>"

"    return position.xyz / position.w;"
"}"

"void main(void)"
"{"
"    float depth = texture2D(VSN_SCENE_DEPTH_TEXTURE, VSN_TEXCOORD).a;"
"    <...>"

"    vec4 colorTex = texture2D(VSN_SCENE_COLOR_TEXTURE, VSN_TEXCOORD);"
"    gl_FragColor = vec4(mix(colorTex.rgb, accum.rgb, accum.a  / (colorTex.a + accum.a)), max(colorTex.a, accum.a));"
"}";

After that, we create a pointer to the Texture3D class and initialize it in the loadTexture3D function.

Matrix3DF transform;
Texture3D* tex3d = loadTexture3D(transform);

The function reads the fv_texture3d.dat texture file through a stream object. The data from the file are used to fill the std::vector<const unsigned char*> texData vector of texture data.

static Texture3D* loadTexture3D(Matrix3DF& transform)
{
    QFile file(":/fv_texture3d.dat");
    <...>
    QDataStream dataStream(fileData);
    <...>

    // x-y-z grid size
    uint32_t sizeX, sizeY, sizeZ;
    dataStream >> sizeX;
    dataStream >> sizeY;
    dataStream >> sizeZ;

    // box origin
    Vector3DF origin;
    dataStream >> origin.x;
    dataStream >> origin.y;
    dataStream >> origin.z;

    // box axes
    Vector3DF axis[3];
    for (int i = 0; i < 3; i++)
    {
        dataStream >> axis[i].x;
        dataStream >> axis[i].y;
        dataStream >> axis[i].z;
    }

    transform = {
        axis[0].x, axis[0].y, axis[0].z, 0.0f,
        axis[1].x, axis[1].y, axis[1].z, 0.0f,
        axis[2].x, axis[2].y, axis[2].z, 0.0f,
        origin.x,  origin.y,  origin.z,  1.0f
    };

    // linear array of values stored in z-y-x traversal order
    auto size = sizeX * sizeY * sizeZ;
    std::vector<float> values(size);
    dataStream.readRawData(reinterpret_cast<char*>(values.data()),
        static_cast<int>(values.size() * sizeof(float)));

    const size_t bgra = 4;
    auto count = sizeX * sizeY * bgra;
    std::vector<const unsigned char*> texData;
    for (uint32_t z = 0; z < sizeZ; z++)
    {
        unsigned char* slice = new unsigned char[count];
        for (uint32_t y = 0; y < sizeY; y++)
        {
            for (uint32_t x = 0; x < sizeX; x++)
            {
                int value_index = z + sizeZ * y + sizeY * sizeZ * x;
                float value = values[value_index];
                auto sliceNum = (y * sizeX + x) * bgra;
                slice[sliceNum + 0] = 255;
                slice[sliceNum + 1] = 255;
                slice[sliceNum + 2] = 255;
                slice[sliceNum + 3] = 0;
                // 0 means no data at this point
                if (value != 0.0f)
                    slice[sliceNum + 3] = value * 255;
            }
        }
        texData.push_back(slice);
    }

After initialization, a texture object is created, and the texture size and data are passed to it. We also set up rasterization filters and the texture repeat mode for cases when texture coordinates go beyond the texture boundaries.

    auto texture3D = new Texture3D();
    texture3D->Init(TextureFormat::BGRA, SizeI(sizeX, sizeY), texData);
    texture3D->SetFilters(TextureFilter::Linear, TextureFilter::Linear);
    texture3D->SetWrap(TextureWrap::NoRepeate, TextureWrap::NoRepeate, TextureWrap::NoRepeate);

    <...>
    return texture3D;
}

Now we add the resulting texture object and the transform matrix, together with their string names, to RenderPostProcess* pPostProcess:

pPostProcess->AddTexture("ValueTexture", tex3d);
pPostProcess->AddMatrix("WorldToTexture", transform.Div());

Next, we add the color palette used to color the heat distribution:

ColorsArray* paletteColors = new ColorsArray{
        {0, 0, 255, 0}, {0, 100, 255, 13}, {0, 185, 255, 13}, {0, 255, 255, 25},
        {0, 255, 159, 38}, {0, 255, 0, 51}, {159, 255, 0, 77}, {255, 255, 0, 102},
        {255, 185, 0, 128}, {255, 100, 0, 153}, {255, 0, 0, 205}
};
pPostProcess->AddTexture("CoverTexture", paletteColors);

Now we pass the post-processing object to the scene:

graphicsScene()->SetPostProcess(pPostProcess);

Below is the final scene rendering.

[Figure: The final scene rendering]

We work hard to improve C3D Vision, adding new classes and methods to make it faster and easier to use. Our drive for improvement comes not only from our analysis of CAD market demands, but also from feedback from users who, just like us, want to make the visualization module better.

Author:
Artem Maksimenko
Product Manager, C3D Labs