Bling includes many high-level function constructs dedicated to 2D and 3D graphics programming in the Bling.Graphics namespace. They include:
  • Texture as a R2 to Color function. The texture coordinate (R2) is a value between (0,0) and (1,1). Textures are often loaded from images or, in the case of WPF, brushes. However, textures can also be defined purely mathematically such as a meta-ball texture.
  • Pixel effect as a Texture to Texture function. A pixel effect represents a texture transformation, which by itself constitutes a complete WPF 2D pixel shader.
  • Transition as a R1 to R1 to Texture to Pixel effect function. A transition represents a transition from one texture to another. The first R1 is the transition's progress (measured from 0 to 1), while the second R1 is a random seed that parameterizes the transition's effect. The texture argument is the destination texture of the transition (what is shown undistorted when progress is one), while the resulting pixel effect consumes the original texture (what is shown when progress is zero).
  • Parametric curve as either a R1 to R2 function (2D curve) or R2 to R3 function (3D curve). Popular parametric curves include Bezier curves.
  • Parametric surface as a R2 to R3 function. Parametric surfaces can be used to generate lit vertices; basic examples include cones, cylinders, and spheres.
  • Height field as a R2 to R1 function. A height field serves two purposes: first, it can be used to distort a surface (e.g., a parametric surface), and second, it can be used to implement bump mapping on a 2D texture. Height fields can come from textures or can be defined mathematically, such as height fields that form ripple or egg crate patterns.
  • Normal map as a R2 to R3 function. A normal map also serves two purposes: first, it can be used for general lighting and surface distortion computations, and second, it can be used to implement enhanced bump mapping on a 2D texture. Normal maps can come from a texture, or can be specified mathematically. An interesting approach is to generate a normal map via transparent symbolic derivation, which is supported for parametric surfaces and curves.
  • Lighting as a R3 to R3 to Color to Color function. Lighting is a bit more complicated because it requires a position, normal, and an input color (if the lighting effect is diffuse). Common lighting effects include specular, diffuse, and ambient light generated from directional, spot, and point lights.

Note that R1 indicates double, R2 indicates a 2D point, and R3 indicates a 3D point.
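
To make these signatures concrete, here is a rough semantic model in plain C#. The delegate names below are hypothetical; Bling's actual types are classes that wrap such functions (and build expression trees so they can compile to shader code):

using System;
using System.Windows;               // Point
using System.Windows.Media;         // Color, Colors
using System.Windows.Media.Media3D; // Point3D, Vector3D

delegate Color TextureFn(Point uv);                 // Texture: R2 -> Color
delegate TextureFn PixelEffectFn(TextureFn source); // Pixel effect: Texture -> Texture
delegate Point3D SurfaceFn(Point uv);               // Parametric surface: R2 -> R3
delegate double HeightFieldFn(Point uv);            // Height field: R2 -> R1
delegate Vector3D NormalMapFn(Point uv);            // Normal map: R2 -> R3

static class TextureSketch {
  // A purely mathematical texture: an 8x8 checkerboard over the unit square.
  public static readonly TextureFn Checker = uv =>
    ((int)(uv.X * 8) + (int)(uv.Y * 8)) % 2 == 0 ? Colors.White : Colors.Black;
}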

Bling graphic functions can be composed without touching their arguments. For example, given two textures t0 and t1, t0 + t1 is equivalent to the texture p => t0(p) + t1(p), meaning that the colors of both textures are summed at every point of the resulting texture. Other operations supported directly over Bling graphic functions include subtraction, multiplication, division, clamping (max and min), interpolation via lerp, and conditions. Bling graphic functions also support various transformations of their return values and arguments; e.g., t0.Distort(uv => 1 - uv) flips texture t0 horizontally and vertically by transforming its input point.
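
As a sketch of how this pointwise lifting could work, consider a hypothetical class that wraps a R2 to Color function. The real Bling types build expression trees rather than plain closures, so this is only the semantic model:

using System;
using System.Windows;       // Point
using System.Windows.Media; // Color

public sealed class LiftedTexture {
  readonly Func<Point, Color> sample;
  public LiftedTexture(Func<Point, Color> sample) { this.sample = sample; }
  public Color this[Point uv] => sample(uv);

  // t0 + t1: sample both textures at the same point and sum the colors
  // (WPF's Color type defines operator +).
  public static LiftedTexture operator +(LiftedTexture a, LiftedTexture b) =>
    new LiftedTexture(p => a[p] + b[p]);

  // Distort transforms the *argument* rather than the result: the input
  // coordinate is remapped before sampling.
  public LiftedTexture Distort(Func<Point, Point> remap) =>
    new LiftedTexture(p => this[remap(p)]);
}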

Graphic functions are applied in succession to produce a desired effect. For example, a lighting function can be applied to positions and normals to create a pixel effect that can then be applied to a WPF image. Example:

image.Effect.Custom = 
  (light.Diffuse() + light.Specular(eyePosition, 2, 1))[Normals, image.Size];

The first part of this code composes two lighting effects via addition, then applies the combined lighting effect to a normal map (Normals) and the size of the image to create a pixel effect. The normal map, in turn, is generated from a texture:

Texture Normals0 = new ImageBrushBl(new BitmapImage("Resources/3977-normal0.jpg".MakePackUri(typeof(Shaders))));
NormalMap Normals = (NormalMap)Normals0;

The explicit coercion from Texture to NormalMap reverses the standard encoding of a normal map as a bitmap, in which each normal component in [-1, 1] is stored as a color channel in [0, 1]:

public static explicit operator NormalMap(Texture Tex) {
  return new NormalMap(UV => {
    // Map each color channel from [0,1] back to a normal component in [-1,1].
    return Tex[UV].ScRGB * 2d - 1d;
  });
}

Result:
normal.PNG

Sometimes it is necessary to "take apart" a Bling graphic function to manipulate it more aggressively. For example, we can define a pixel effect directly as a function. Consider:

image.Effect.Custom = (Input) => (uv) => {
  // Stagger alternate rows by half a brick width to create the brick pattern.
  DoubleBl t = (uv.X + (uv.Y % 0.2 > 0.1).Condition(0.1, 0)) % 0.2;
  // X component of the normal: a parabolic bump across each brick's width.
  DoubleBl tx = (t - 0.1).Pow(2) * 80;
  tx = (t > 0).Condition(tx, -tx);
  DoubleBl t2 = uv.Y % 0.1;
  // Y component of the normal: the same bump across each brick's height.
  DoubleBl ty = (t2 - 0.05).Pow(2) * 400;
  ty = (t2 > 0).Condition(ty, -ty);
  // True near the mortar lines between bricks.
  BoolBl stride = t.Abs < 0.015 | t2.Abs < 0.016;
  // Light each pixel diffusely: noisy normals in the mortar, smooth bump
  // normals within each brick.
  return (light.Diffuse())[new Point3DBl(uv * image.Size, 0),
    new Point3DBl(stride.Condition(ShaderLib.Noise[uv].ScR, tx),
      stride.Condition(ShaderLib.Noise[1 - uv].ScR, ty), 1).Normalize](Input[uv]);
};

Here we define a custom effect directly by manipulating the Input texture and the uv texture coordinates. After these manipulations, the diffuse lighting equation is used to produce a brick grid effect on the image the effect is applied to. Result:

brick.PNG
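
For contrast, the same Input/uv pattern can express much simpler effects. Here is a minimal hypothetical example, assuming sampled colors can be scaled by a double as the lifted arithmetic described earlier suggests:

// Flip the image in both axes (as in the Distort example earlier) and dim it.
image.Effect.Custom = (Input) => (uv) => Input[1 - uv] * 0.5;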

Bling graphic functions can be used to create both 2D effects and 3D effects in Direct3D. For 3D, we must deal with both vertices and pixels, whereas for 2D WPF pixel shaders we dealt only with pixels. To start, a 3D shape must be defined, such as from a mesh or a parametric surface. Consider:

PSurface Surface = Geometries.Sphere.Displace(Geometries.EggCrate.Adjust(Frequency, Magnitude, Phase.Value * .25));

This code creates a sphere that is distorted by an egg crate height field. The equations for the sphere parametric surface and egg crate height field are easily defined through basic trigonometry:
public static readonly PSurface Sphere = new PSurface((PointBl p) => {
  PointBl sin = p.SinU;
  PointBl cos = p.CosU;
  return new Point3DBl(sin[0] * cos[1], sin[0] * sin[1], cos[0]);
});
public static readonly HeightField EggCrate = new HeightField((PointBl p) => {
  p = p.SinCosU;
  return p.X * p.Y;
});
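
Stripped of the Bling wrapper types, these are standard formulas. A plain C# sketch, assuming the SinU/CosU/SinCosU helpers rescale the unit-square input to angles before applying sine and cosine (the exact scaling is internal to Bling):

using System;
using System.Windows.Media.Media3D;

static class GeometrySketch {
  // Unit sphere via spherical coordinates: theta in [0, pi], phi in [0, 2*pi].
  public static Point3D Sphere(double theta, double phi) =>
    new Point3D(Math.Sin(theta) * Math.Cos(phi),
                Math.Sin(theta) * Math.Sin(phi),
                Math.Cos(theta));

  // Egg crate: a doubly periodic bump pattern, sin(x) * cos(y).
  public static double EggCrate(double x, double y) =>
    Math.Sin(x) * Math.Cos(y);
}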

Displacement and height field adjustment are also defined mathematically:
public PSurface PSurface.Displace(Func<PointBl, DoubleBl> HeightField) {
 return new PSurface(p =>
  this[p] + Normals[p] * HeightField(p));
}
public HeightField HeightField.Adjust(DoubleBl Frequency, DoubleBl Magnitude, DoubleBl Phase) {
  return new HeightField(p => Magnitude * this[Frequency * (p + Phase)]);
}
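
The Normals used by Displace come from differentiating the surface: the normal at (u, v) is the normalized cross product of the surface's partial derivatives. Bling derives these symbolically; the following sketch approximates the same computation with central finite differences:

using System;
using System.Windows.Media.Media3D;

static class NormalSketch {
  public static Vector3D NormalAt(Func<double, double, Point3D> S, double u, double v) {
    const double h = 1e-4;
    // Partial derivatives of S, approximated by central differences.
    Vector3D du = (S(u + h, v) - S(u - h, v)) / (2 * h);
    Vector3D dv = (S(u, v + h) - S(u, v - h)) / (2 * h);
    Vector3D n = Vector3D.CrossProduct(du, dv);
    n.Normalize();
    return n;
  }
}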

Note that displacement is based on the normals of the surface, which for a parametric surface are computed via symbolic derivation (see the sketch above). The height field value is added along the surface's normal so that the distortion follows the surface's shape. The height field itself is adjusted via the Adjust method, which sets its frequency, magnitude, and phase (in this way, height fields behave much like waves). Before we can render this surface, we need to create some lights, points, and matrices:

MatrixBl<D4, D4> world = MatrixBl<D4, D4>.Identity;
Point3DBl eye = new Point3DBl(0f, -2d, -4d);
MatrixBl<D4, D4> view = eye.LookAt(0d, new Point3DBl(0, 1, 0));
MatrixBl<D4, D4> project = (.25.PI()).PerspectiveFov(500d / 500d, .1, 100d);
AmbientLightBl ambLight = new AmbientLightBl() { Color = Colors.White.Bl() };
SpotLightBl spotLight = new SpotLightBl() {
 Direction = new Point3DBl(1, -1, 1).Normalize,
 Color = Colors.Red.Bl(),
 Position = new Point3DBl(-2, 2, -2),
 InnerConeAngle = 50.ToDegrees(),
 OuterConeAngle = 180.ToDegrees(),
};
DirectionalLightBl dirLight = new DirectionalLightBl() {
 Direction = new Point3DBl(0, 0, 1),
 Color = Colors.White.Bl(),
};

The world matrix defines where the surface is. For now, we set world to the identity matrix, meaning that our surface is neither rotated nor translated. We define eye as where the viewer is and use it to create a view matrix. The project matrix translates between 3D world coordinates and 2D screen coordinates. Finally, three lights, which are wrappers around WPF 3D lights, are defined for use in lighting computations. Before we can render, we also need to create a rendering device and a texture to use as the skin for the surface:
RenderDevice Device = new RenderDevice();
Texture Autumn = Device.TextureFromUri(AutumnUri);

Finally, we can render the surface:
Device.Render(Surface, 200, 200, (IntBl n, PointBl uv, IVertex vertex) => {
  // Final vertex position: the surface transformed by world, view, and projection.
  vertex.Position = (Surface * (world * (view * project)))[uv];
  // Lighting inputs are transformed by the world matrix only.
  Point3DBl normal = (Point3DBl)(Surface.Normals[uv] * world).Normalize;
  Point3DBl usePosition = (Point3DBl)(Surface[uv] * world);
  Lighting L = spotLight.Specular(eye, 1d, .8) + ambLight.Ambient(0.3) + dirLight.Diffuse();
  // Skin the surface with the autumn texture (uv offset by 0.5).
  ColorBl clr = Autumn[uv + 0.5];
  vertex.Color = L[usePosition, normal](clr);
});


A render function call specifies how many samples to take (in this case, 200 by 200 vertices, for a total of 40,000 vertices) and an action that renders each vertex by assigning it a position and color. The action first assigns the vertex's final position by transforming the surface by the world, view, and project matrices and then sampling the result at the current parametric coordinate uv. The next lines define the normal and position used in lighting computations; these values are transformed only by the world matrix and are obtained directly from the surface at the same parametric coordinate uv. The lighting to be used is computed by simply adding multiple lighting options together, and a pixel is sampled from the autumn texture to act as the surface skin. This color is then plugged into the lighting computation to produce the vertex's color.
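
In plain terms, the matrix chain in the position assignment carries the model-space surface point through the world, view, and projection transforms, followed by the perspective divide. A sketch using WPF's Matrix3D and Point4D types (Bling fuses all of this into a single vertex expression):

using System.Windows.Media.Media3D;

static class PipelineSketch {
  public static Point3D ToClipSpace(Point3D p, Matrix3D world, Matrix3D view, Matrix3D project) {
    // Row-vector convention: p' = p * world * view * project.
    Point4D h = world.Transform(new Point4D(p.X, p.Y, p.Z, 1.0));
    h = project.Transform(view.Transform(h));
    return new Point3D(h.X / h.W, h.Y / h.W, h.Z / h.W); // perspective divide
  }
}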

Results at varying frequencies and magnitudes, as rendered in DirectX 10:
sphere0.PNG
sphere1.PNG
sphere2.PNG
sphere3.PNG
