# 2D to 3D Conversion

## 1. Introduction

The statement "software should be as simple as possible for the user" is well known, but it is only partially right. To illustrate this, I would use the following metaphor: the MiG-17 is much easier for a pilot than the MiG-31, yet the MiG-31 is much more effective than the MiG-17. Similarly, C does not require knowledge of the object-oriented paradigm, but C++ is much more effective than C. I would like to show that highly effective scientific software cannot always be the simplest for the user; it may require knowledge of multiple inheritance. The conversion of 2D images to 3D is used as an example.

## 2. Background: Examples of Multiple Inheritance

Any real engineering task involves a union of domains. This fact can be naturally expressed by multiple inheritance. Below, some natural examples concerning a transmitter and a receiver are considered.

### 2.1 Physical field domain

A typical field-theory task contains a model of a physical field and an object that consumes the field. One such task is the simulation of a transmitter and a receiver. The following picture presents both objects. Here Transmitter is an object that implements the `IPhysicalField` (physical field) interface:

```
/// <summary>
/// Physical field
/// </summary>
public interface IPhysicalField
{
    /// <summary>
    /// Dimension of space
    /// </summary>
    int SpaceDimension
    {
        get;
    }

    /// <summary>
    /// Count of components
    /// </summary>
    int Count
    {
        get;
    }

    /// <summary>
    /// Type of n-th component
    /// </summary>
    /// <param name="n">Component number</param>
    /// <returns>Type of n-th component</returns>
    object GetType(int n);

    /// <summary>
    /// Type of transformation of n-th component
    /// </summary>
    /// <param name="n">Component number</param>
    /// <returns>Transformation type</returns>
    object GetTransformationType(int n);

    /// <summary>
    /// Calculates field
    /// </summary>
    /// <param name="position">Position</param>
    /// <returns>Array of components of field</returns>
    object[] this[double[] position]
    {
        get;
    }
}
```
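To make the contract concrete, here is a minimal hypothetical implementation of this interface: a scalar field with a single `Double` component that decays as 1/r² from the origin. The `PointSourceField` name and the decay law are illustrative assumptions, not part of the framework.

```csharp
/// <summary>
/// Hypothetical scalar field (illustration only): a single Double
/// component that decays as 1 / r^2 from the origin.
/// Implements the IPhysicalField contract declared above.
/// </summary>
public class PointSourceField
{
    /// <summary>Dimension of space (3D)</summary>
    public int SpaceDimension => 3;

    /// <summary>A single scalar component</summary>
    public int Count => 1;

    /// <summary>Type of the n-th component</summary>
    public object GetType(int n) => typeof(double);

    /// <summary>Transformation type of the n-th component (scalar: none)</summary>
    public object GetTransformationType(int n) => null;

    /// <summary>Calculates the field at a given position</summary>
    public object[] this[double[] position]
    {
        get
        {
            double r2 = 0;
            for (int i = 0; i < SpaceDimension; i++)
            {
                r2 += position[i] * position[i];
            }
            // Single component: 1 / r^2
            return new object[] { 1 / r2 };
        }
    }
}
```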

So the radiation of the transmitter is represented as a physical field. This interface can be implemented by different objects; for example, in my article "Determination of orbits of artificial satellites" it is implemented by a "Gravitational field" object. The Receiver object implements the `IFieldConsumer` (consumer of field) interface:

```
/// <summary>
/// Consumer of physical field
/// </summary>
public interface IFieldConsumer
{
    /// <summary>
    /// Dimension of space
    /// </summary>
    int SpaceDimension
    {
        get;
    }

    /// <summary>
    /// Count of external fields
    /// </summary>
    int Count
    {
        get;
    }

    /// <summary>
    /// Gets the n-th field
    /// </summary>
    /// <param name="n">Field number</param>
    /// <returns>The n-th field</returns>
    IPhysicalField this[int n]
    {
        get;
    }

    /// <summary>
    /// Removes field
    /// </summary>
    /// <param name="field">Field to remove</param>
    void Remove(IPhysicalField field);

    /// <summary>
    /// Consumes field
    /// </summary>
    void Consume();
}
```
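As a sketch of the consumer side, the following hypothetical receiver stores a set of scalar fields (simplified here to delegates instead of `IPhysicalField` objects) and, on `Consume`, sums their values at its own position. This is one possible semantics of `Consume`; all names are illustrative.

```csharp
using System;
using System.Collections.Generic;

/// <summary>
/// Sketch of a field consumer (illustration only): stores scalar
/// fields as delegates and sums their values at its own position.
/// </summary>
public class SimpleReceiver
{
    readonly List<Func<double[], double>> fields = new List<Func<double[], double>>();

    /// <summary>Position of the receiver</summary>
    public double[] Position { get; set; } = new double[3];

    /// <summary>Last consumed value</summary>
    public double Value { get; private set; }

    /// <summary>Count of external fields</summary>
    public int Count => fields.Count;

    /// <summary>Adds a field</summary>
    public void Add(Func<double[], double> field) => fields.Add(field);

    /// <summary>Removes a field</summary>
    public void Remove(Func<double[], double> field) => fields.Remove(field);

    /// <summary>Consumes all fields at the receiver position</summary>
    public void Consume()
    {
        double sum = 0;
        foreach (var f in fields)
        {
            sum += f(Position);
        }
        Value = sum;
    }
}
```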

This interface is implemented not only by receivers. It could also be implemented, for example, by an "airplane" object: the surface of an airplane reflects an electromagnetic radiation field. In turn, the "airplane" object could implement the `IPhysicalField` interface, its physical field being the reflected radiation. The L 1 link connects the receiver and the transmitter. Its direction is the association direction: the Transmitter does not know about the Receiver, but not vice versa. The L 1 link is an object of the `FieldLink` type. This object implements the `ICategoryArrow` interface, so `FieldLink` as an `ICategoryArrow` contains `Source` and `Target` properties. The following code represents the `Source` property:

```
/// <summary>
/// Source
/// </summary>
private IFieldConsumer source;

/// <summary>
/// Source
/// </summary>
ICategoryObject ICategoryArrow.Source
{
    get
    {
        return source as ICategoryObject;
    }
    set
    {
        // Checks whether "value" implements IFieldConsumer interface
        // If not then throws exception
        // If yes then assigns object to source field
        source = CategoryOperations.GetSource<IFieldConsumer>(value);
    }
}
```

So the source of a `FieldLink` is necessarily an object which implements the `IFieldConsumer` interface. The `Target` property is implemented similarly, but the target of a `FieldLink` is necessarily an object which implements the `IPhysicalField` interface.
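The type check behind such an assignment can be sketched as follows; `LinkOperations.GetSource<T>` is a hypothetical analogue of `CategoryOperations.GetSource<T>`, not the framework code.

```csharp
using System;

/// <summary>
/// Sketch of the type check performed when the Source of a link
/// is assigned (illustration only).
/// </summary>
public static class LinkOperations
{
    /// <summary>
    /// Checks whether value implements T; throws an exception otherwise
    /// </summary>
    public static T GetSource<T>(object value) where T : class
    {
        if (value is T t)
        {
            return t;
        }
        throw new ArgumentException("Source should implement " + typeof(T).Name);
    }
}
```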

### 2.2 Geometrical domain

It is clear that the physical picture depends on the geometrical positions of the transmitter and the receiver. So these objects should implement interfaces related to geometrical positions; indeed, they implement the `IPosition` interface. However, this implementation is not explicit; explicit implementation will be explained later. Since both the transmitter and the receiver implement the `IPosition` interface, we can construct a more complicated picture. This picture means that Transmitter is rigidly linked to the reference frame Frame 1, and Receiver is rigidly linked to Frame 2. The C 1 and C 2 links represent objects of the `ReferenceFrameArrow` type (one icon corresponds to the `ReferenceFrameArrow` type, the other to the `FieldLink` type). So we already have multiple inheritance: the Transmitter object implements the `IPhysicalField` and `IPosition` interfaces, and the Receiver object implements the `IFieldConsumer` and `IPosition` interfaces. However, this fact does not seem complicated. It is clear that we should link the transmitter and the receiver by a link that means field interaction; in addition, we should state the geometrical positions of the transmitter and the receiver. The following picture represents the properties of Frame 1. These properties have the following meaning: the origin of Frame 1 has the absolute coordinates X=12.5, Y=-7.4, Z=5.6, and the elements of the transformation matrix of Frame 1 are represented at the left part of the form. The C 1 link means that the orientation and position of Transmitter coincide with those of Frame 1; similarly, the orientation and position of Receiver coincide with those of Frame 2. The `ReferenceFrameArrow` has the following properties:

• The source of a `ReferenceFrameArrow` can only be an object that implements the `IPosition` interface;
• The target of a `ReferenceFrameArrow` can only be an object that implements the `IReferenceFrame` interface;
• An object which implements the `IPosition` interface can be the source of only a single `ReferenceFrameArrow`.

The third property reflects the following evident fact: one object cannot be rigidly linked to two different reference frames. The framework supports a hierarchy of frames with relative positions and orientations; a sample of such a hierarchy is presented in the following picture. Usage of relative positions and orientations is very convenient since it enables us to link a virtual transmitter with a virtual aircraft, a virtual video camera with a virtual car, etc.
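The arithmetic behind such a hierarchy can be sketched as follows: the absolute position of a frame is its parent's absolute position plus the relative position rotated by the parent's orientation matrix, and the absolute orientation is the product of the matrices up the chain. The `Frame` class below is an illustration, not a framework type.

```csharp
/// <summary>
/// Sketch of a reference frame in a hierarchy (illustration only):
/// position and orientation are stored relative to a parent frame.
/// </summary>
public class Frame
{
    /// <summary>Parent frame (null for the base frame)</summary>
    public Frame Parent;

    /// <summary>Position relative to the parent</summary>
    public double[] RelativePosition = new double[3];

    /// <summary>Orientation matrix relative to the parent</summary>
    public double[,] RelativeMatrix =
        new double[,] { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };

    /// <summary>Absolute orientation: product of matrices up the hierarchy</summary>
    public double[,] AbsoluteMatrix
    {
        get
        {
            if (Parent == null)
            {
                return RelativeMatrix;
            }
            double[,] pm = Parent.AbsoluteMatrix;
            double[,] m = new double[3, 3];
            for (int i = 0; i < 3; i++)
            {
                for (int j = 0; j < 3; j++)
                {
                    for (int k = 0; k < 3; k++)
                    {
                        m[i, j] += pm[i, k] * RelativeMatrix[k, j];
                    }
                }
            }
            return m;
        }
    }

    /// <summary>Absolute position: parent position plus rotated relative position</summary>
    public double[] AbsolutePosition
    {
        get
        {
            if (Parent == null)
            {
                return RelativePosition;
            }
            double[] p = Parent.AbsolutePosition;
            double[,] pm = Parent.AbsoluteMatrix;
            double[] r = new double[3];
            for (int i = 0; i < 3; i++)
            {
                r[i] = p[i];
                for (int j = 0; j < 3; j++)
                {
                    r[i] += pm[i, j] * RelativePosition[j];
                }
            }
            return r;
        }
    }
}
```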

### 2.3 Domain of informational flow

It is clear that an engineering framework should support moving reference frames. There are two ways to implement this picture. The first one is the development of a type which implements the `IReferenceFrame` interface and contains intrinsic motion support. The second way is export of the motion law. The second way is preferable since it provides more opportunities; for example, it enables us to use Simulink support. Let us suppose that we would like to implement the following motion law of Transmitter:

X(t) = t;

Y(t) = t²;

Z(t) = t³;

The following picture represents this case. If Frame 1 did not move, we could use an object of the `RigidReferenceFrame` type. To make Frame 1 move, we use an object of the `ReferenceFrameData` type instead; each of these types has its own icon. What is the difference between these two types? The `ReferenceFrameData` type implements the `IDataConsumer` interface, which means that this object can consume external digital data. This data is used for the definition of the 6D trajectory of Frame 1. So we have a second example of multiple inheritance: `ReferenceFrameData` implements both `IReferenceFrame` and `IDataConsumer`. As an `IReferenceFrame`, it can be connected by a `ReferenceFrameArrow` to any object which implements the `IPosition` interface. However, Frame 1 is also linked to the Motion law object by a link of the `DataLink` type. The source of this link can be any object which implements the `IDataConsumer` interface; the target can be any object which implements the `IMeasurements` interface. This link has the following meaning: the object which implements the `IMeasurements` interface provides virtual measurements, and the object which is connected by the `DataLink` object consumes these virtual measurements. The object linked to Frame 1 by the D 1 link is Motion law. Its icon corresponds to the `VectorFormulaConsumer` type. This type implements the `IMeasurements` interface, so the Motion law object can be the end of the `DataLink` arrow. Properties of the Motion law object are presented below. It contains 5 expressions:

| Formula | Expression |
| --- | --- |
| Formula_1 | t |
| Formula_2 | t² |
| Formula_3 | t³ |
| Formula_4 | 1 |
| Formula_5 | 0 |

Properties of Frame 1 are presented below. These properties have the following meaning. If we denote by X(t), Y(t), Z(t) the time dependencies of the X, Y, Z coordinates of Frame 1 and by Q0(t), Q1(t), Q2(t), Q3(t) those of its orientation quaternion, then we have the following mapping:

| Motion parameter of Frame 1 | Expression of Motion law |
| --- | --- |
| X(t) | Formula_1 |
| Y(t) | Formula_2 |
| Z(t) | Formula_3 |
| Q0(t) | Formula_4 |
| Q1(t) | Formula_5 |
| Q2(t) | Formula_5 |
| Q3(t) | Formula_5 |

As a result, we have:

X(t) = t;

Y(t) = t²;

Z(t) = t³;

Q0(t) = 1;

Q1(t) = 0;

Q2(t) = 0;

Q3(t) = 0.

So we have obtained the required motion law of Frame 1.
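The resulting motion law is simple enough to sketch directly; the mapping of the five formulas to coordinates and quaternion components follows the table above (illustration only, not framework code).

```csharp
/// <summary>
/// Sketch of the exported motion law of Frame 1: the five formulas
/// t, t^2, t^3, 1, 0 are mapped to X, Y, Z and the quaternion
/// components as described above (illustration only).
/// </summary>
public static class MotionLaw
{
    /// <summary>Returns [X, Y, Z, Q0, Q1, Q2, Q3] at time t</summary>
    public static double[] State(double t)
    {
        return new double[]
        {
            t,          // X(t) = Formula_1
            t * t,      // Y(t) = Formula_2
            t * t * t,  // Z(t) = Formula_3
            1,          // Q0(t) = Formula_4
            0, 0, 0     // Q1, Q2, Q3 = Formula_5
        };
    }
}
```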

Import of external data can be used in different domains; for example, it can be used for the definition of a physical field. The Transmitter has an icon which corresponds to the `PhysicalFieldBase` type. As we already know, this type implements the `IPhysicalField` and `IPosition` interfaces. Besides them, it also implements the `IDataConsumer` interface, which is used for the import of the physical field law. The following picture represents such an import: the field physical law is imported from the Formula object. This object has the following properties. The Formula object contains three "constants" x, y, z and a formula. The term "constants" is conditional; indeed, the "constants" can be changed by external objects. Let us consider how the Transmitter object uses this information. The editor of properties of Transmitter is presented below. This picture has the following meaning: the relative coordinates X, Y, Z of a point correspond to the x, y, z "constants" of the Formula object, and the field value corresponds to Formula_1 of the Formula object. So we have a model of the field and the Receiver object, and we would like to use it. The Receiver object has an icon which corresponds to the `PhysicalFieldMeasurements3D` type. This type implements the following interfaces:

• `IFieldConsumer`
• `IPosition`
• `IMeasurements`

The meaning of the first and second interfaces has already been explained. The presence of `IMeasurements` means that an object of `PhysicalFieldMeasurements3D` provides virtual measurements of the field value. We already know that an object which implements the `IMeasurements` interface can be linked by a `DataLink` to an object which implements the `IDataConsumer` interface; for example, `VectorFormulaConsumer` implements this interface. We also know that `VectorFormulaConsumer` implements the `IMeasurements` interface. This fact has the following meaning: a `VectorFormulaConsumer` object consumes virtual measurements, transforms them, and provides new virtual measurements. The following picture represents the consumption of virtual measurements of Receiver. The Output component has the type `VectorFormulaConsumer` and is connected by a `DataLink` to Receiver as an `IDataConsumer`. Properties of Output are presented below. These properties have the following meaning: x corresponds to the output value of the Receiver object, and Formula_1 of Output is the square of the output value of the Receiver object. This text is my first attempt at an explanation of multiple inheritance in the framework. The next chapter contains a more advanced example.
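The Receiver-to-Output chain described above reduces to a very small data-flow pattern: a provider of virtual measurements feeds a consumer that squares the value. The following sketch models measurements as delegates; it illustrates the idea only and is not framework code.

```csharp
using System;

/// <summary>
/// Sketch of the data flow described above (illustration only):
/// a provider of virtual measurements feeds a consumer which
/// transforms the consumed value and provides a new measurement.
/// </summary>
public static class DataFlow
{
    /// <summary>Consumes a measurement and provides its square</summary>
    public static Func<double> Square(Func<double> measurement)
    {
        return () =>
        {
            double x = measurement(); // consume the virtual measurement
            return x * x;             // provide the transformed one
        };
    }
}
```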

### 2.4 Informational flow and transformations of 3D shapes

Data flow can be used in very different ways; for example, it can be used for the deformation of 3D shapes. The deformation scheme is presented below: we have an initial (prototype) figure and a math law of deformation. Every surface point of the prototype is transformed by the deformation law. A sample implementation is presented below. The Initial shape object is a red 3D cube. The Deformation law is an object which contains the math law of deformation; its properties are presented below. The law has the following meaning: the coordinates x', y', z' of the deformed figure surface depend on the coordinates x, y, z of a point of the prototype surface in the following way:

x' = ax + by + cz;

y' = dx + fy + gz;

z' = hx + iy + jz.

Properties of the Deformed cube object are presented below. These properties have the following meaning: the variables x, y, z of Deformation law correspond to the surface coordinates x, y, z of Initial shape, and the outputs Formula_1, Formula_2, Formula_3 correspond to the surface coordinates of Deformed cube. The Deformed cube object is the beginning of the DF link. Since the DF link has the `DataLink` type, the type of Deformed cube should implement the `IDataConsumer` interface. Indeed, the Deformed cube object has the `DeformedWpfShape` type, and this type implements the `IDataConsumer` interface: Deformed cube really consumes data from the Deformation law object. The `DeformedWpfShape` type implements several other interfaces as well, so we have another good example of multiple inheritance.
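The deformation law itself is just a linear map of surface points. A minimal sketch, with the coefficients a..j collected into a 3×3 matrix (the class name is illustrative):

```csharp
/// <summary>
/// Sketch of the linear deformation law (illustration only):
/// every surface point (x, y, z) of the prototype is mapped to
/// (ax + by + cz, dx + fy + gz, hx + iy + jz).
/// </summary>
public class LinearDeformation
{
    readonly double[,] c; // 3 x 3 coefficient matrix

    public LinearDeformation(double[,] coefficients)
    {
        c = coefficients;
    }

    /// <summary>Transforms a point of the prototype surface</summary>
    public double[] Transform(double[] p)
    {
        double[] q = new double[3];
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
            {
                q[i] += c[i, j] * p[j];
            }
        }
        return q;
    }
}
```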

## 3. 2D to 3D Conversion

The "2D to 3D conversion" theme is well known. I have no particular interest in it, but I find that it is a good sample for explanation. Here one untypical scheme of 2D to 3D conversion is considered. First of all, this scheme contains the definition of the positions of the cameras. Secondly, this scheme assumes manual work. The idea is presented in the following picture: the user marks 2D images with points. One of the images is presented below. Usage of this markup enables us to define the 3D positions of the points.

### 3.1 Math Background

It is well known that n parameters can be defined from m measured parameters if and only if m ≥ n. So if we would like to define the 3D positions of N points, then we need 3N or more measured parameters. Every photo provides 2 measurements for every 3D point: the X and Y coordinates of the pixel which corresponds to the point. So one photo provides 2N measured parameters. This number is less than 3N, so one photo is not enough for 3D reconstruction. However, 2 photos provide 4N measured parameters, and since 4N > 3N, two photos provide enough information for 3D reconstruction. Let us consider the situation where the positions or orientations of the cameras are not known. The position and orientation of a camera are defined by 6 parameters. Suppose we have k cameras and we would like to define the positions of N points together with the positions and orientations of the cameras. In this situation we have 2kN measured parameters and we would like to define 3N + 6k parameters. This estimation is (in principle) possible if 2kN > 3N + 6k. But the situation is not so easy: we should take observability issues into account. According to observability theory, the positions of all cameras cannot be defined. This fact can be explained in the following way. If we perform an equal 6D motion of all objects (cameras and points), then the pictures on the photos will not change. This is very close to the explanation of Galileo's principle of inertia. So the positions of all cameras and all points cannot be estimated together. However, if we define the positions of only k - 1 cameras, then the task has a solution. The following text contains the solution of the problem.
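The parameter count argument above can be sketched as a small calculation. Here the unknowns are the 3N point coordinates plus 6 position parameters for k - 1 cameras, since the position of one camera must be fixed because of the observability issue; the class below is illustrative only.

```csharp
/// <summary>
/// Sketch of the parameter count estimate (illustration only):
/// N points and k cameras give 2kN measurements against 3N unknown
/// point coordinates plus 6 unknown position parameters for k - 1
/// cameras (the position of one camera is fixed).
/// </summary>
public static class Observability
{
    /// <summary>Number of measured parameters: 2 per point per photo</summary>
    public static int Measurements(int k, int n) => 2 * k * n;

    /// <summary>Number of unknown parameters</summary>
    public static int Unknowns(int k, int n) => 3 * n + 6 * (k - 1);

    /// <summary>True if the reconstruction task is (in principle) solvable</summary>
    public static bool IsSolvable(int k, int n) => Measurements(k, n) >= Unknowns(k, n);
}
```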

### 3.2 Implementation

#### 3.2.1 Positions of cameras

There is a nonlinear dependence of the positions of the 2D pixels on the position and orientation of the camera, so nonlinear regression is used for the definition of these parameters. We use the nonlinear least squares method for our task. However, this method requires initial parameter estimates; we suppose that we have good initial estimates of these parameters. The uncertain 6D position of a camera is decomposed into a nominal position and a deviation, as presented below. We have five reference frames: Base, Shift, Rotation X, Rotation Y, Rotation Z. The Base frame corresponds to the nominal 6D position of the camera. The Shift frame is linearly shifted with respect to Base. The frames Rotation X, Rotation Y, Rotation Z contain a sequence of rotations with respect to the X, Y and Z axes. The relative geometrical positions of these frames are presented below: the Base frame is drawn with continuous axes, while Shift, Rotation X, Rotation Y and Rotation Z are drawn with their own line templates. The Shift frame is parallel to Base and is shifted with respect to it; the values of the X, Y and Z shifts are equal to a, b and c respectively. Rotation X is obtained from Shift by a rotation with respect to the X axis. Similarly, Rotation Y is obtained from Rotation X by a rotation with respect to the Y axis, and Rotation Z is obtained from Rotation Y by a rotation with respect to the Z axis. The rotation angles are equal to d, g, f respectively. Let us explain the meaning of all objects of the above diagram. The Const object contains the constants "0" and "1"; its properties are presented below. These constants are used in quaternions. The Coordinates object contains the a, b, c, d, f, g constants. As noted, the first three constants correspond to shifts and the other constants to rotation angles. The Shift properties are presented below. These properties have the following meaning:
the relative coordinates of Shift are respectively equal to Formula_1, Formula_2, Formula_3 of the Coordinates object. In turn, the Coordinates object has the following properties:

It means that the values of Formula_1, Formula_2, Formula_3 are respectively equal to the constants a, b, c. Therefore the relative coordinates of the Shift frame are equal to a, b, c of the Coordinates object. Since Formula_1, Formula_2 of the Const object are equal to "0" and "1", the components of the orientation quaternion of Shift are equal to the following values: Q0=1, Q1=0, Q2=0, Q3=0. This quaternion corresponds to a trivial rotation, so Shift is parallel to Base. The Trigo X object contains trigonometric functions which are used for the orientation quaternion of the Rotation X object. Properties of Rotation X are presented below: Formula_1, Formula_2 of Trigo X are respectively equal to the Q0, Q1 components of the orientation quaternion of Rotation X. The objects Trigo Y and Trigo Z are analogues of Trigo X, and Rotation Y and Rotation Z are analogues of Rotation X. Our situation contains a set of virtual cameras, so we should create a set of such complicated structures of positions, and the resulting picture would be very complicated. But the framework supports aggregation of objects. The following picture represents the aggregate designer. Checked boxes correspond to visible objects; other objects are encapsulated. So the visible objects are the following:

• Shift;
• Rotation Z;
• Coordinates;

The visibility of these objects is caused by the following reasons. The Shift frame is installed at the nominal position of the camera, while the camera itself is installed on the Rotation Z frame. The Coordinates object is visible because we would like to control the deviations of the coordinates of the camera. As a result, we have the following aggregated object.

#### 3.2.2 Multiple inheritance once again. Virtual camera as object of data flow

The main purpose of a virtual video camera is the presentation and animation of 3D objects. However, a video camera can also be considered as an object of data flow: the camera performs a transformation of a 3D space point into a 2D point of the screen.

```
/// <summary>
/// Transformer of objects
/// </summary>
public interface IObjectTransformer
{
    /// <summary>
    /// Input variables
    /// </summary>
    string[] Input { get; }

    /// <summary>
    /// Output variables
    /// </summary>
    string[] Output { get; }

    /// <summary>
    /// Gets type of i-th input variable
    /// </summary>
    /// <param name="i">Variable index</param>
    /// <returns>The type</returns>
    object GetInputType(int i);

    /// <summary>
    /// Gets type of i-th output variable
    /// </summary>
    /// <param name="i">Variable index</param>
    /// <returns>The type</returns>
    object GetOutputType(int i);

    /// <summary>
    /// Calculation
    /// </summary>
    /// <param name="input">Input</param>
    /// <param name="output">Output</param>
    void Calculate(object[] input, object[] output);
}
```

The key function of this interface is `Calculate`; it transforms input into output. The WPF implementation of the virtual video camera implements `IObjectTransformer`:

```
/// <summary>
/// WPF implementation of virtual video camera
/// </summary>
[Serializable()]
public class WpfCamera : Motion6D.Camera, ISerializable, IUpdatableObject, IObjectTransformer
```

Following snippet contains implementation of this interface by `WpfCamera`:

```
/// <summary>
/// Calculation
/// </summary>
/// <param name="input">Input</param>
/// <param name="output">Output</param>
void IObjectTransformer.Calculate(object[] input, object[] output)
{
    // Coordinates of the 3D point
    for (int i = 0; i < 3; i++)
    {
        inpos[i] = (double)input[i];
    }

    // Scale factor obtained from the screen width
    double w = (double)input[3] / (2 * sin);

    // 3D to 3D transformation
    BaseFrame.GetRelativePosition(inpos, outpos);

    // 3D to 2D transformation
    double x = Math.Atan2(outpos[0], -outpos[2]) * w;
    double y = Math.Atan2(outpos[1], -outpos[2]) * w;
    output[0] = x;
    output[1] = y;
}
```

In fact, the above function performs two transformations. The first one is a 3D to 3D transformation: coordinates in the common (base) reference frame are transformed into coordinates of the reference frame which is rigidly linked to the camera. The second transformation is a 3D to 2D perspective projection from 3D space to the 2D screen. This projection uses the inverse cotangent; the following picture explains its usage.

#### 3.2.3 Usage of transformers

Transformers of objects can be used in different ways; for example, they can be used by objects of the `IDataConsumer` and `IMeasurements` types. The following scheme represents this usage. We have a Camera object of the `WpfCamera` type; we know that this object implements `IObjectTransformer`. The framework supports a link between objects which implement the `IObjectTransformer` and `IObjectTransformerConsumer` interfaces. The L arrow links the Transformation object as an `IObjectTransformerConsumer` and Camera as an `IObjectTransformer`. The Transformation object has the `ObjectTransformer` type. The following listing represents the head of this class:

```
/// <summary>
/// Transformer of objects
/// </summary>
[Serializable()]
public class ObjectTransformer : CategoryObject, ISerializable,
    IDataConsumer, IMeasurements, IPostSetArrow, IObjectTransformerConsumer
```

This type implements the required `IObjectTransformerConsumer` interface. Moreover, it implements the `IDataConsumer` and `IMeasurements` interfaces. In the above picture, the Transformation object is linked to Input as an `IDataConsumer`. Properties of the Transformation object are presented below.

These properties mean the following mapping between the outputs of the Input object and the inputs of the Transformation object:

| | Outputs of Input object | Inputs of Transformation object |
| --- | --- | --- |
| 1 | Formula_1 | X |
| 2 | Formula_2 | Y |
| 3 | Formula_3 | Z |
| 4 | Formula_4 | Width |

Let us explain the meaning of these parameters. First of all, remind that any `WpfCamera` can transform a 3D space point into a 2D screen point. The X, Y, Z are the 3D coordinates of the space point, and Width is the width of the screen. The following picture represents the properties of the Input object, and properties of the Output object are presented after it. So the output of Transformation contains two double variables X and Y. However, the `ObjectTransformer` object is not restricted to scalars: the input variables X, Y, Z do not have to be of the `Double` type only. These variables can be replaced by `Double` arrays, as presented below. Here X, Y, Z have the "type" `Double[]`, and the output variables X and Y of the Transformation object become `Double[]` as well. These output arrays are obtained by componentwise calculation of the data contained in the input arrays.
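The componentwise mode can be sketched as follows: when the inputs are arrays, the same 3D-to-2D point transformation is applied at every index. The projection is passed as a delegate standing in for the camera; all names here are illustrative, not framework code.

```csharp
using System;

/// <summary>
/// Sketch of the componentwise mode of the transformer (illustration
/// only): the point transformation is applied to every index of the
/// input coordinate arrays.
/// </summary>
public static class ComponentwiseTransformer
{
    /// <summary>Applies a point transformation to arrays of coordinates</summary>
    public static double[][] Calculate(
        Func<double, double, double, double[]> transform,
        double[] x, double[] y, double[] z)
    {
        double[] outX = new double[x.Length];
        double[] outY = new double[x.Length];
        for (int i = 0; i < x.Length; i++)
        {
            // 3D to 2D transformation of the i-th point
            double[] p = transform(x[i], y[i], z[i]);
            outX[i] = p[0];
            outY[i] = p[1];
        }
        return new double[][] { outX, outY };
    }
}
```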

#### 3.2.4 Full simulation picture

Before the description of the full algorithm, let us consider a helper algorithm which is presented below. This picture contains 3 images of an airplane and 11 red cubes. Every cube has 2 coordinates on every image. The following picture presents these points on charts: the top part of the picture contains the images of the cubes obtained from a virtual camera, and the bottom part contains the coordinates of these cubes (the Y axis is inverted).

The full simulation picture is presented below. The Plane is a 3D model of an airplane. The objects Camera 1, Camera 2 and Camera 3 are virtual cameras. The cameras are linked to Plane by visibility links, which means that the cameras observe Plane. The Position 1, Position 2 and Position 3 are the nominal positions of Camera 1, Camera 2 and Camera 3 respectively. These 6D positions are the base frames for the aggregates Shift 1, Shift 2 and Shift 3 which simulate the 6D deviations of the positions of the cameras; the aggregates are described in 3.2.1. The charts C 1, C 2, C 3 contain the nominal 2D positions of the points near the plane; C 1, C 2 and C 3 correspond to Camera 1, Camera 2 and Camera 3 respectively. The ordinates of the charts X, Y and Z are arrays of the 3D X, Y and Z coordinates of the points. The Points iterator object is an object of the nonlinear regression component; its properties are presented below. The right panel contains the parameters which we would like to define: these parameters are the ordinates of the X, Y and Z objects. The middle and right panels contain the matching map. The mapping is presented in the following table:

| Number | Matching array | Selection |
| --- | --- | --- |
| 0 | Parameter X of Trans 1 | Selection X of C 1 |
| 1 | Parameter Y of Trans 1 | Selection Y of C 1 |
| 2 | Parameter X of Trans 2 | Selection X of C 2 |
| 3 | Parameter Y of Trans 2 | Selection Y of C 2 |
| 4 | Parameter X of Trans 3 | Selection X of C 3 |
| 5 | Parameter Y of Trans 3 | Selection Y of C 3 |

How do the parameters in the middle column depend on the ordinates of the X, Y and Z objects? Roughly speaking, the parameters in the middle column are the result of the 3D to 2D transformation performed by the cameras. In fact, the full algorithm includes the 3D to 2D transformation; the regression element then solves the inverse task "2D to 3D". Let us consider the 3D to 2D transformation. The Trans 1 object is connected to Camera 1 as an `IObjectTransformerConsumer`, so Trans 1 performs the 3D to 2D transformation which corresponds to Camera 1. In turn, the Trans 1 object is connected to the X, Y and Z objects as an `IDataConsumer`. Properties of Trans 1 are presented below. These properties mean that the input parameters X, Y, Z of Trans 1 match the ordinates of the X, Y and Z objects.

The Full iterator object performs the same matching as Points iterator. However, the defined parameters are extended by the 6D positions of Camera 2 and Camera 3. Because of the observability issues (see 3.1), the 6D position of Camera 1 is not included. Properties of Full iterator are presented below. The left panel of this picture contains the parameter Shift 2/Coordinates.a. This parameter has the following meaning. There is an aggregate Shift 2 which simulates the deviation of Camera 2. The aggregate contains Coordinates, which has the a, b, c, d, f, g constants (see 3.2.1). These constants are the 6D deviation of the position of the camera. So Shift 2/Coordinates.a is one deviation parameter of the position of Camera 2. The Full iterator contains all deviation parameters of Camera 2 and Camera 3.
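The iteration performed by such a regression processor can be sketched, for a single parameter, as a Gauss-Newton loop over residuals between measurements and the model. This is an illustration of the principle, not the framework's implementation; the names and the numerical derivative are assumptions.

```csharp
using System;

/// <summary>
/// Sketch of the nonlinear least squares iteration (illustration only):
/// a one-parameter Gauss-Newton loop over residuals between measured
/// values and the model.
/// </summary>
public static class NonlinearRegression
{
    /// <summary>
    /// Fits parameter p of model f(p, i) to measurements m[i]
    /// starting from the initial estimate p0
    /// </summary>
    public static double Fit(Func<double, int, double> f, double[] m, double p0)
    {
        double p = p0;
        for (int iter = 0; iter < 50; iter++)
        {
            double num = 0, den = 0;
            for (int i = 0; i < m.Length; i++)
            {
                double h = 1e-6;
                double d = (f(p + h, i) - f(p, i)) / h; // numerical derivative
                double r = m[i] - f(p, i);              // residual
                num += d * r;
                den += d * d;
            }
            p += num / den; // Gauss-Newton step
        }
        return p;
    }
}
```

With a good initial estimate (as assumed in 3.2.1), the loop converges quickly; for a linear model it converges in a single step.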

## 4. 2D to 3D Conversion Application

The universal framework is very powerful, so it is not easy for many people to use it. However, the framework has a lot of relatively simple subversions. These subversions are more specialized and therefore simpler. Here, the 2D to 3D conversion version is considered. We have already constructed the algorithm; it is saved to the file `FullAgorithm.cfa`. Now we would like to use it as a computational resource. The file is used as a resource in our application.

### 4.1 Application Outlook

The application outlook is presented in the following picture. The main window is an MDI window; a child window is presented below. The user marks points near the plane on three child windows with properties of cameras.

The application also contains a control window with two buttons. The first button corresponds to the definition of the 3D positions of points only; the second one corresponds to the definition of the 3D positions of points and the 6D positions of cameras. The second tab page of this window represents the 6D deviations of the cameras.

### 4.2 Business Logic

The business logic contains different manipulations with the computational resource stored in the resource file `FullAgorithm.cfa`. First of all, we transform this resource into an object in the following way:

```
/// <summary>
/// Desktop object
/// </summary>
static PureDesktopPeer Desktop
{
    get
    {
        // Construction of object
        PureDesktopPeer desktop = new PureDesktopPeer();

        return desktop;
    }
}
```

So we have an object of `PureDesktopPeer` which corresponds to the following picture. This object contains objects which correspond to the squares in the picture. The following snippet shows access to the C 1 object.

```
// Gets object from desktop. Object name is "C 1"
object o = desktop.GetObject("C 1");

// It is known that the type of o is DataPerformer.Series,
// so we can cast the object to DataPerformer.Series
DataPerformer.Series s = o as DataPerformer.Series;

// Clears all points of series
s.Clear();

// Coordinates of new points
double[,] a = new double[,] { { 1.5, 9 }, { 2, 8.4 } };
for (int i = 0; i < 2; i++)
{
    // Adding new points to series
}
```

This snippet is not useful for real business logic, but it is very clear and so it is useful for explanation. The useful business logic is considered below; it includes data input, business actions and output.

#### 4.2.1 Input

Input of the coordinates of the 2D points of the cameras is performed in the following way:

```
/// <summary>
/// Processing of camera points
/// </summary>
/// <param name="k">Number of camera</param>
/// <param name="picture">The picture object</param>
void ProcessPoints(int k, Picture picture)
{
    // Gets series which corresponds to the k-th camera
    DataPerformer.Series s = desktop.GetObject("C " + k) as DataPerformer.Series;

    // Clears all points of series
    s.Clear();
    List<int> l = new List<int>();
    Dictionary<int, int[]> d = picture.Points;
    l.AddRange(d.Keys);
    l.Sort();
    foreach (int i in l)
    {
        int[] p = d[i];
        // Adding new points to series
    }
}
```

So the coordinates of the points marked by the user are linked to the corresponding objects. The coordinates of points are marked by mouse; the red crosses in the following picture correspond to these points. Besides these points, the application supports input of the nominal 6D positions of the cameras. The user interface for this input is presented below: it enables us to input the coordinates and the transition matrix of a camera. Let us consider this operation in the business logic.

```
/// <summary>
/// Process reference frame
/// </summary>
/// <param name="k">Number of frame</param>
/// <param name="picture">The picture object</param>
void ProcessFrame(int k, Picture picture)
{
    // The frame
    RigidReferenceFrame frame = desktop.GetObject("Position " + k) as RigidReferenceFrame;

    // Coordinates
    double[] p = picture.Coordinates;

    // Position of frame
    double[] pos = frame.RelativePosition;
    for (int i = 0; i < 3; i++)
    {
        pos[i] = p[i];
    }

    // Transition matrix
    double[,] m = picture.Matrix;
    double[,] mat = frame.RelativeMatrix;
    for (int i = 0; i < 3; i++)
    {
        for (int j = 0; j < 3; j++)
        {
            mat[i, j] = m[i, j];
        }
    }
}
```

#### 4.2.2 Business actions

The main business action is the usage of nonlinear regression. This action is presented in the following snippet:

```
/// <summary>
/// Performs regression
/// </summary>
/// <param name="processorName">Name of regression processor</param>
void Iterate(string processorName)
{
    Prepare();

    // Regression processor
    AliasRegression reg = desktop.GetObject(processorName) as AliasRegression;

    // Iteration
    reg.FullIterate();
}

/// <summary>
/// Iterates points only
/// </summary>
internal void IteratePoints()
{
    Iterate("Points iterator");
}

/// <summary>
/// Full iteration
/// </summary>
internal void FullIterate()
{
    Iterate("Full iterator");
}
```

Roughly speaking, the business logic contains calls of the `FullIterate` function of the Points iterator and Full iterator objects.

#### 4.2.3 Output

The output contains two operations. The first one is the output of the coordinates of the 3D space points. These coordinates are contained in the X, Y and Z series. The following snippet shows access to these objects and extraction of the coordinates from them.

```
/// <summary>
/// Names of coordinate objects
/// </summary>
private static readonly string[] xyz = new string[] { "X", "Y", "Z" };

/// <summary>
/// Coordinates of 3D points
/// </summary>
internal double[][] Coordinates
{
    get
    {
        List<double[]> l = new List<double[]>();
        List<DataPerformer.Series> ls = new List<DataPerformer.Series>();

        // Series
        foreach (string s in xyz)
        {
            DataPerformer.Series ser = desktop.GetObject(s) as DataPerformer.Series;
            ls.Add(ser);
        }
        int n = ls[0].Count; // number of points in a series
        for (int i = 0; i < n; i++)
        {
            double[] p = new double[3];
            for (int j = 0; j < 3; j++)
            {
                DataPerformer.Series ser = ls[j];
                p[j] = ser[i, 1];
            }
            l.Add(p);
        }
        // Transformation from list to array
        return l.ToArray();
    }
}
```

The second output operation is the extraction of the deviations of the 6D positions of Camera 2 and Camera 3. These positions are contained in the aggregates Shift 2 and Shift 3. Every aggregate contains a Coordinates object which implements the `IAlias` interface; the deviations are accessible through this interface.

```
/// <summary>
/// Names of alias variables
/// </summary>
private static readonly string[] pc = new string[] { "a", "b", "c", "d", "f", "g" };

/// <summary>
/// Deviations of 6D positions of cameras
/// </summary>
internal double[][] Aliases
{
    get
    {
        // Two objects which contain information about deviations of "Camera 2" and "Camera 3"
        IAlias[] al = new IAlias[2];
        for (int i = 0; i < 2; i++)
        {
            // Names of objects
            string s = "Shift " + (i + 2) + "/Coordinates";

            // Objects
            al[i] = desktop.GetObject(s) as IAlias;
        }

        // Parameters of deviation of "Camera 2" and "Camera 3"
        double[][] x = new double[2][];
        for (int i = 0; i < 2; i++)
        {
            // Alias object which corresponds to the 6D camera position deviation
            IAlias a = al[i];
            double[] y = new double[6];
            for (int j = 0; j < 6; j++)
            {
                // Reads parameter from deviation object
                y[j] = (double)a[pc[j]];
            }
            x[i] = y;
        }
        return x;
    }
}
```

## Points of Interest

The 2D to 3D transformation is not my urgent problem. However, I found that I should have a better explanation of the framework. The main purpose of this article is the explanation of the usage of multiple inheritance in the framework.

## History

My early articles contained more advertisement than explanation of code. Recently some people told me about their interest in the framework, so my latest articles are more devoted to code.