 

Problems porting a GLSL Shadertoy shader to Unity

I'm currently trying to port a shadertoy.com shader (Atmospheric Scattering Sample, interactive demo with code) to Unity. The shader is written in GLSL, and I have to start the editor with C:\Program Files\Unity\Editor>Unity.exe -force-opengl to make it render the shader (otherwise a "This shader cannot be run on this GPU" error comes up), but that's not the issue right now. The problem is the port itself.

The functions for the scattering etc. are all identical and "runnable" in my ported shader; the only difference is that the mainImage() function manages the camera, the light direction and the ray direction itself. This of course has to be changed so that Unity's camera position, view direction, and light sources and directions are used instead.
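
In outline, the mapping I think I need looks something like this (a sketch only, assuming a directional light; _WorldSpaceCameraPos, _WorldSpaceLightPos0 and _Object2World are Unity's built-in uniforms, and worldVertex is just my placeholder name):

vec3 worldVertex = vec3(_Object2World * gl_Vertex);  // vertex position in world space
vec3 eye = _WorldSpaceCameraPos;                     // ray origin: the camera position
vec3 dir = normalize(worldVertex - eye);             // ray from the camera through this vertex
vec3 l   = normalize(_WorldSpaceLightPos0.xyz);      // direction of the directional light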

The main function of the original looks like this:

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // default ray dir
    vec3 dir = ray_dir( 45.0, iResolution.xy, fragCoord.xy );

    // default ray origin
    vec3 eye = vec3( 0.0, 0.0, 2.4 );

    // rotate camera
    mat3 rot = rot3xy( vec2( 0.0, iGlobalTime * 0.5 ) );
    dir = rot * dir;
    eye = rot * eye;

    // sun light dir
    vec3 l = vec3( 0, 0, 1 );

    vec2 e = ray_vs_sphere( eye, dir, R );
    if ( e.x > e.y ) {
        discard;
    }

    vec2 f = ray_vs_sphere( eye, dir, R_INNER );
    e.y = min( e.y, f.x );

    vec3 I = in_scatter( eye, dir, e, l );

    fragColor = vec4( I, 1.0 );
}

I've read through the documentation of that function and how it's supposed to work at https://www.shadertoy.com/howto .

Image shaders implement the mainImage() function in order to generate procedural images by computing a color for each pixel. This function is expected to be called once per pixel, and it is the responsibility of the host application to provide the right inputs to it, get the output color from it and assign it to the screen pixel. The prototype is:

void mainImage( out vec4 fragColor, in vec2 fragCoord );

where fragCoord contains the pixel coordinates for which the shader needs to compute a color. The coordinates are in pixel units, ranging from 0.5 to resolution-0.5, over the rendering surface, where the resolution is passed to the shader through the iResolution uniform (see below).

The resulting color is gathered in fragColor as a four component vector, the last of which is ignored by the client. The result is gathered as an "out" variable in anticipation of the future addition of multiple render targets.
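
For example, a typical image shader derives normalized coordinates from those inputs itself (a minimal sketch, not part of the scattering shader):

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // map pixel coordinates into [0,1] x [0,1]
    vec2 uv = fragCoord / iResolution.xy;
    fragColor = vec4(uv, 0.0, 1.0); // visualize the coordinate system
}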

So in that function there are references to iGlobalTime to make the camera rotate with time, and references to iResolution for the resolution. I've embedded the shader in a Unity shader and tried to fix and wire up dir, eye and l so that it works with Unity, but I'm completely stuck. I get some sort of picture which looks "related" to the original shader (top is the original, bottom the current Unity state):

[Image: top, the original Shadertoy shader; bottom, the current Unity state]

I'm not a shader professional; I only know some basics of OpenGL. For the most part I write game logic in C#, so all I could really do was look at other shader examples and at how I could get the data about the camera, light sources etc. into this code, but as you can see, nothing really works out.

I've copied the skeleton code for the shader from https://en.wikibooks.org/wiki/GLSL_Programming/Unity/Specular_Highlights and some vectors from http://forum.unity3d.com/threads/glsl-shader.39629/ .

I hope someone can point me in the right direction on how to fix this shader / correctly port it to Unity. Below is the current shader code. To reproduce it, create a new shader in a blank project, copy the code in, create a new material, assign the shader to that material, then add a sphere, apply the material to it, and add a directional light.

Shader "Unlit/AtmoFragShader" {
    Properties{
        _MainTex("Base (RGB)", 2D) = "white" {}
    _LC("LC", Color) = (1,0,0,0) /* stuff from the testing shader, now really used */
        _LP("LP", Vector) = (1,1,1,1)
    }

        SubShader{
        Tags{ "Queue" = "Geometry" } //Is this even the right queue?

        Pass{
        //Tags{ "LightMode" = "ForwardBase" }
        GLSLPROGRAM

    /* begin port by copying in the constants */
    // math const
    const float PI = 3.14159265359;
    const float DEG_TO_RAD = PI / 180.0;
    const float MAX = 10000.0;

    // scatter const
    const float K_R = 0.166;
    const float K_M = 0.0025;
    const float E = 14.3;                       // light intensity
    const vec3  C_R = vec3(0.3, 0.7, 1.0);  // 1 / wavelength ^ 4
    const float G_M = -0.85;                    // Mie g

    const float R = 1.0; /* this is the radius of the sphere? this should be set from the geometry or something... */
    const float R_INNER = 0.7;
    const float SCALE_H = 4.0 / (R - R_INNER);
    const float SCALE_L = 1.0 / (R - R_INNER);

    const int NUM_OUT_SCATTER = 10;
    const float FNUM_OUT_SCATTER = 10.0;

    const int NUM_IN_SCATTER = 10;
    const float FNUM_IN_SCATTER = 10.0;

    /* begin functions. These are outside the #ifdef blocks so that they are accessible to both the vertex and the fragment shader. */

    // angle : pitch, yaw
    mat3 rot3xy(vec2 angle) {
        vec2 c = cos(angle);
        vec2 s = sin(angle);

        return mat3(
            c.y, 0.0, -s.y,
            s.y * s.x, c.x, c.y * s.x,
            s.y * c.x, -s.x, c.y * c.x
            );
    }

    // ray direction
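    // (pinhole camera: xy is the pixel offset from the screen centre; z is derived from the vertical fov)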
    vec3 ray_dir(float fov, vec2 size, vec2 pos) {
        vec2 xy = pos - size * 0.5;

        float cot_half_fov = tan((90.0 - fov * 0.5) * DEG_TO_RAD);
        float z = size.y * 0.5 * cot_half_fov;

        return normalize(vec3(xy, -z));
    }

    // ray intersects sphere
    // e = -b +/- sqrt( b^2 - c )
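    // note: assumes the sphere is centered at the origin; returns the entry/exit distances along dir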
    vec2 ray_vs_sphere(vec3 p, vec3 dir, float r) {
        float b = dot(p, dir);
        float c = dot(p, p) - r * r;

        float d = b * b - c;
        if (d < 0.0) {
            return vec2(MAX, -MAX);
        }
        d = sqrt(d);

        return vec2(-b - d, -b + d);
    }

    // Mie
    // g : ( -0.75, -0.999 )
    //      3 * ( 1 - g^2 )               1 + c^2
    // F = ----------------- * -------------------------------
    //      2 * ( 2 + g^2 )     ( 1 + g^2 - 2 * g * c )^(3/2)
    float phase_mie(float g, float c, float cc) {
        float gg = g * g;

        float a = (1.0 - gg) * (1.0 + cc);

        float b = 1.0 + gg - 2.0 * g * c;
        b *= sqrt(b);
        b *= 2.0 + gg;

        return 1.5 * a / b;
    }

    // Rayleigh
    // g : 0
    // F = 3/4 * ( 1 + c^2 )
    float phase_rayleigh(float cc) {
        return 0.75 * (1.0 + cc);
    }

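    // atmospheric density: exponential falloff with altitude above the inner (planet) radius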
    float density(vec3 p) {
        return exp(-(length(p) - R_INNER) * SCALE_H);
    }

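    // optical depth: midpoint-rule integral of density along the segment p..q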
    float optic(vec3 p, vec3 q) {
        vec3 step = (q - p) / FNUM_OUT_SCATTER;
        vec3 v = p + step * 0.5;

        float sum = 0.0;
        for (int i = 0; i < NUM_OUT_SCATTER; i++) {
            sum += density(v);
            v += step;
        }
        sum *= length(step) * SCALE_L;

        return sum;
    }

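    // in-scattering: march along the view ray between the entry/exit distances in e,
    // accumulating the light arriving from direction l at each sample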
    vec3 in_scatter(vec3 o, vec3 dir, vec2 e, vec3 l) {
        float len = (e.y - e.x) / FNUM_IN_SCATTER;
        vec3 step = dir * len;
        vec3 p = o + dir * e.x;
        vec3 v = p + dir * (len * 0.5);

        vec3 sum = vec3(0.0);
        for (int i = 0; i < NUM_IN_SCATTER; i++) {
            vec2 f = ray_vs_sphere(v, l, R);
            vec3 u = v + l * f.y;

            float n = (optic(p, v) + optic(v, u)) * (PI * 4.0);

            sum += density(v) * exp(-n * (K_R * C_R + K_M));

            v += step;
        }
        sum *= len * SCALE_L;

        float c = dot(dir, -l);
        float cc = c * c;

        return sum * (K_R * C_R * phase_rayleigh(cc) + K_M * phase_mie(G_M, c, cc)) * E;
    }
    /* end functions */
    /* vertex shader begins here*/
#ifdef VERTEX
    const float SpecularContribution = 0.3;
    const float DiffuseContribution = 1.0 - SpecularContribution;

    uniform vec4 _LP;
    varying vec2 TextureCoordinate;
    varying float LightIntensity; 
    varying vec4 someOutput;

    /* transient stuff */
    varying vec3 eyeOutput;
    varying vec3 dirOutput;
    varying vec3 lOutput;
    varying vec2 eOutput; 

    /* lighting stuff */
    // i.e. one could #include "UnityCG.glslinc" 
    uniform vec3 _WorldSpaceCameraPos;
    // camera position in world space
    uniform mat4 _Object2World; // model matrix
    uniform mat4 _World2Object; // inverse model matrix
    uniform vec4 _WorldSpaceLightPos0;
    // direction to or position of light source
    uniform vec4 _LightColor0;
    // color of light source (from "Lighting.cginc")


    void main()
    {
        /* code from that example shader */
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

        vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
        vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal);
        vec3 lightVec = normalize(_LP.xyz - ecPosition);

        vec3 reflectVec = reflect(-lightVec, tnorm);
        vec3 viewVec = normalize(-ecPosition);

        /* copied from https://en.wikibooks.org/wiki/GLSL_Programming/Unity/Specular_Highlights for testing stuff */
        //I have no idea what I'm doing, but hopefully this computes some vectors which I need
        mat4 modelMatrix = _Object2World;
        mat4 modelMatrixInverse = _World2Object; // unity_Scale.w 
                                                 // is unnecessary because we normalize vectors

        vec3 normalDirection = normalize(vec3(
            vec4(gl_Normal, 0.0) * modelMatrixInverse));
        vec3 viewDirection = normalize(vec3(
            vec4(_WorldSpaceCameraPos, 1.0)
            - modelMatrix * gl_Vertex));
        vec3 lightDirection;
        float attenuation;

        if (0.0 == _WorldSpaceLightPos0.w) // directional light?
        {
            attenuation = 1.0; // no attenuation
            lightDirection = normalize(vec3(_WorldSpaceLightPos0));
        }
        else // point or spot light
        {
            vec3 vertexToLightSource = vec3(_WorldSpaceLightPos0
                - modelMatrix * gl_Vertex);
            float distance = length(vertexToLightSource);
            attenuation = 1.0 / distance; // linear attenuation 
            lightDirection = normalize(vertexToLightSource);
        }
        /* test port */
        // default ray dir
        //That's the direction of the camera here? 
        vec3 dir = viewDirection; //normalDirection;//viewDirection;// tnorm;//lightVec;//lightDirection;//normalDirection; //lightVec;//tnorm;//ray_dir(45.0, iResolution.xy, fragCoord.xy);

        // default ray origin
        //I think they mean the position of the camera here? 
        vec3 eye = vec3(_WorldSpaceCameraPos); //vec3(_WorldSpaceLightPos0); //// vec3(0.0, 0.0, 0.0); //_WorldSpaceCameraPos;//ecPosition; //vec3(0.0, 0.0, 2.4);

        // rotate camera not needed, remove it

        // sun light dir
        //I think they mean the direction of our directional light? 
        vec3 l = lightDirection;//_LightColor0.xyz; //lightDirection; //normalDirection;//normalize(vec3(_WorldSpaceLightPos0));//lightVec;// vec3(0, 0, 1);

        /* this computes the intersection of the ray and the sphere.. is this really needed?*/
        vec2 e = ray_vs_sphere(eye, dir, R);
        /* copy stuff so that we can use it in the fragment shader; "discard" is only allowed
        in the fragment shader, so the rest has to be computed there */
        eOutput = e;
        eyeOutput = eye;
        dirOutput = dir;
        lOutput = l;
    }

#endif

#ifdef FRAGMENT

    uniform sampler2D _MainTex;
    varying vec2 TextureCoordinate;
    uniform vec4 _LC;
    varying float LightIntensity;

    /* transient port */
    varying vec3 eyeOutput;
    varying vec3 dirOutput;
    varying vec3 lOutput;
    varying vec2 eOutput;

    void main()
    {
        /* real fragment */

        if (eOutput.x > eOutput.y) {
            //discard;
        }

        vec2 f = ray_vs_sphere(eyeOutput, dirOutput, R_INNER);
        vec2 e = eOutput;
        e.y = min(e.y, f.x);

        vec3 I = in_scatter(eyeOutput, dirOutput, e, lOutput);
        gl_FragColor = vec4(I, 1.0);

    }

#endif

    ENDGLSL
    }
    }
}

Any help is appreciated, sorry for the long post and explanations.

Edit: I just found out that the radius of the sphere does have an influence: a sphere with scale 2.0 in every direction gives a much better result. However, the picture is still completely independent of the camera's viewing angle and of any lights, so this is nowhere near the Shadertoy version.

[Image: current result with the sphere scaled to 2.0]

asked Feb 06 '16 by Maximilian Gerhardt


People also ask

Can I use GLSL in Unity?

Furthermore, Unity supports a version of GLSL similar to version 1.0. x for OpenGL ES 2.0 (the specification is available at the “Khronos OpenGL ES API Registry”); however, Unity's shader documentation [3] focuses on shaders written in Unity's own “surface shader” format and Cg/HLSL [4].

Does Shadertoy use GLSL?

Shadertoy.com is an online community and platform for computer graphics professionals, academics and enthusiasts who share, learn and experiment with rendering techniques and procedural art through GLSL code. There are more than 52 thousand public contributions as of mid-2021 coming from thousands of users.


1 Answer

It looks like you are trying to render a 2D texture over a sphere, which calls for a different approach. For what you are trying to do, I would apply the shader to a plane that crosses the sphere.

For the general case, see this article showing how to convert a Shadertoy shader to Unity3D.

Here are some of the steps it covers (a small before/after sketch follows the list):

  • Replace iGlobalTime shader input (“shader playback time in seconds”) with _Time.y
  • Replace iResolution.xy (“viewport resolution in pixels”) with _ScreenParams.xy
  • Replace vec2 types with float2, mat2 with float2x2 etc.
  • Replace vec3(1) shortcut constructors, in which all elements have the same value, with an explicit float3(1,1,1)
  • Replace texture2D() with tex2D()
  • Replace atan(x,y) with atan2(y,x) <- Note parameter ordering!
  • Replace mix() with lerp()
  • Replace matrix multiplications (e.g. m * v or v *= m) with mul(m, v)
  • Remove the third (bias) parameter from texture2D lookups
  • mainImage(out vec4 fragColor, in vec2 fragCoord) is the fragment shader function, equivalent to float4 mainImage(float2 fragCoord : SV_POSITION) : SV_Target
  • UV coordinates in GLSL have 0 at the top and increase downwards, in HLSL 0 is at the bottom and increases upwards, so you may need to use uv.y = 1.0 - uv.y at some point.
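
As a small illustration of these substitutions (a sketch of my own, not taken from the article), here is a trivial Shadertoy fragment and a Unity Cg/HLSL equivalent:

// Shadertoy (GLSL):
// void mainImage( out vec4 fragColor, in vec2 fragCoord )
// {
//     vec2 uv = fragCoord.xy / iResolution.xy;
//     vec3 col = vec3(0.5) + 0.5 * cos(iGlobalTime + vec3(0.0, 2.0, 4.0));
//     fragColor = vec4(col * uv.y, 1.0);
// }

// Unity (Cg/HLSL, inside a CGPROGRAM block):
float4 frag(float4 fragCoord : SV_POSITION) : SV_Target
{
    float2 uv = fragCoord.xy / _ScreenParams.xy;
    uv.y = 1.0 - uv.y;                     // flip the vertical axis
    float3 col = float3(0.5, 0.5, 0.5)     // vec3(0.5) written out explicitly
        + 0.5 * cos(_Time.y + float3(0.0, 2.0, 4.0));
    return float4(col * uv.y, 1.0);
}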

About this question:

Tags{ "Queue" = "Geometry" } //Is this even the right queue?

Queue determines the order in which objects are rendered; Geometry is one of the first. If you want your shader to render over everything, you could use Overlay, for example. This topic is covered here.

  • Background - this render queue is rendered before any others. It is used for skyboxes and the like.
  • Geometry (default) - this is used for most objects. Opaque geometry uses this queue.
  • AlphaTest - alpha tested geometry uses this queue. It's a separate queue from the Geometry one since it's more efficient to render alpha-tested objects after all solid ones are drawn.
  • Transparent - this render queue is rendered after Geometry and AlphaTest, in back-to-front order. Anything alpha-blended (i.e. shaders that don’t write to depth buffer) should go here (glass, particle effects).
  • Overlay - this render queue is meant for overlay effects. Anything rendered last should go here (e.g. lens flares).
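
For example, moving the pass to a later queue is a one-line tag change (a sketch; Transparent is a plausible choice for an atmosphere effect):

Tags { "Queue" = "Transparent" }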
answered Nov 05 '22 by Carlos Oliveira