All posts by admin

Reusable date range validator

Note: This post describes my developer experience – the steps I had to take to make it work. I am not describing only the final result but also the problems I experienced and how I dealt with them. In the development process, the issues snowball and spiral out of control. This is also a follow-up to the previous post.

The validator checks that the catalogue object's two properties – validFrom and validTo – restrict its validity to that date range. Neither property is required, so the catalogue can be active from a specific day forward, until some date, or forever. But the date range has to be valid: the end of validity has to be after the start of validity. The range is also validated only when the catalogue is active. That information is stored in the isActive property, initially named isEnabled but renamed later.
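
Expressed as a plain function, the intended rule is roughly the following (a minimal sketch – the standalone helper below is only illustrative, the real check lives in the class-validator constraint shown later):

// Illustrative helper only - the real check is implemented as a class-validator constraint.
function isValidRange(isActive: boolean, validFrom?: Date, validTo?: Date): boolean {
  if (!isActive) return true;              // inactive catalogues are not restricted
  if (!validFrom || !validTo) return true; // open-ended ranges (or no range) are allowed
  return validFrom <= validTo;             // both bounds present: end must not precede start
}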

The validator is implemented on the server side using a custom validator and added to the properties of the input type with the @Validate(DateFromToValidator) decorator.

@InputType()
export class CatalogInput {
  ...
  @Validate(DateFromToValidator)
  @Field({ nullable: true })
  validFrom?: Date;
  @Validate(DateFromToValidator)
  @Field({ nullable: true })
  validTo?: Date;
  @Field()
  isActive: boolean;
}

The code of the validator from the previous post:

@ValidatorConstraint({ name: "dateFromToValidator", async: false })
export class DateFromToValidator implements ValidatorConstraintInterface {
  validate(value: string, args: ValidationArguments) {
    const catalog = args.object as CatalogInput;
    if (!catalog.isEnabled) {
      return true;
    }

    return catalog.validFrom <= catalog.validTo;
  }

  defaultMessage(args: ValidationArguments) {
    return "Text is too short or too long!";
  }
}

You may have noticed that the DateFromToValidator contains bugs.

  1. It does not work when only one of the range values is provided.
  2. The error message is nonsensical.
  3. It has to be split into two constraints to show a proper validation message for each property.
  4. The validator is not reusable.

This happened during development while I was dealing with the other snowballing issues, and I had completely forgotten about it.

@ValidatorConstraint({ name: "dateFromValidator", async: false })
export class DateFromValidator implements ValidatorConstraintInterface {
  validate(value: string, args: ValidationArguments) {
    const catalog = args.object as CatalogInput;
    if (!catalog.isEnabled) {
      return true;
    }

    return catalog.validFrom <= catalog.validTo;
  }

  defaultMessage(args: ValidationArguments) {
    return "Text is too short or too long!";
  }
}

Reusability

The steps to make the validator reusable were introducing a new interface, IHasValidityRestriction, and getting rid of the dependency on the value of the isEnabled property.

export interface IHasValidityRestriction {
  validFrom?: Date;
  validTo?: Date;
}

Then any class implementing this interface can be validated:

@ValidatorConstraint({ name: "dateRangeValidator", async: false })
export class DateRangeValidator implements ValidatorConstraintInterface {
  constructor(private message: string) {}

  validate(value: string, args: ValidationArguments) {
    const o = args.object as IHasValidityRestriction;
    // validate only when both validFrom and validTo values have been supplied
    if (o.validFrom && o.validTo) { 
      return o.validFrom <= o.validTo;
    }

    return true;
  }

  defaultMessage(args: ValidationArguments) { return this.message; }
}

The class-validator library has a neat conditional decorator ValidateIf that can ignore the validators on a property when the provided condition function returns false. In this case, the date range is validated when isEnabled is true.

Decorators

I have also created two decorators to validate validFrom and validTo properties; each of them has a different constraint violation message.

export function DateRangeStart(validationOptions?: ValidationOptions) {
  return function(object: Object, propertyName: string) {
    registerDecorator({
      name: "dateRangeStartValidator",
      target: object.constructor,
      propertyName: propertyName,
      constraints: [],
      options: validationOptions,
      validator: new DateRangeValidator(
        `${propertyName} must be before date range end`
      )
    });
  };
}

export function DateRangeEnd(validationOptions?: ValidationOptions) {
  return function(object: Object, propertyName: string) {
    registerDecorator({
      name: "dateRangeEndValidator",
      target: object.constructor,
      propertyName: propertyName,
      constraints: [],
      options: validationOptions,
      validator: new DateRangeValidator(
        `${propertyName} must be after date range start`
      )
    });
  };
}

Result

And this is the result when everything is put together:

@InputType()
export class CatalogInput implements IHasValidityRestriction {
  @ValidateIf(o => o.isEnabled)
  @DateRangeStart()
  @Field({ nullable: true })
  validFrom?: Date;
  
  @ValidateIf(o => o.isEnabled)
  @DateRangeEnd()
  @Field({ nullable: true })
  validTo?: Date;
  
  @Field()
  isEnabled: boolean;
}

Notes

I was trying to improve the UX by disabling the form submit button when there is a field with an invalid value, but that does not work in this case: even when I change the dates so that the range is valid, the fields remain marked invalid until the next server-side validation.

<Button
  type="primary"
  disabled={hasError}
  htmlType="submit">
  Submit
</Button>

And the hasError value is detected from the form itself.

const fieldErrors = form.getFieldsError();
const hasError = Object.keys(fieldErrors).find(p => fieldErrors[p] !== undefined) !== undefined;

The (again) ugly fix was to explicitly reset the errors for the date range fields on submit.

handleSubmit = async (catalog: ICatalog, form: WrappedFormUtils<any>) => {
  ...
  form.setFields({
    validFrom: { value: form.getFieldValue("validFrom") },
    validTo: { value: form.getFieldValue("validTo") }
  });
  ...
}

Is my code going to be full of ugly hacks? I certainly hope not. Some might still argue, but a much better fix is to reset the field errors when the date range fields change. The handler has to reset the errors for both input fields because the validator invalidates both, and the onChange handler has been added to both input fields.

<DatePicker
  onChange={() =>
    form.setFields({
      validTo: { value: form.getFieldValue("validTo") },
      validFrom: { value: form.getFieldValue("validFrom") }
    })
  } />

Final thoughts

Developers who know how similar controls work would certainly avoid half of the discussed problems and use the DatePicker disabledDate property to limit the start and end dates. Using it improves the UX on the client side, but the data provided by the user still has to be validated on the server.

{getFieldDecorator("validTo", {
  initialValue: momentFunc(catalog.validTo) 
})(<DatePicker
  onChange={() =>
    form.setFields({
      validTo: { value: form.getFieldValue("validTo") },
      validFrom: { value: form.getFieldValue("validFrom") }
    })  
  } 
  disabledDate={this.disabledValidTo}
/>)}
disabledValidTo = (validTo?: moment.Moment): boolean => {
  const validFrom = this.props.form.getFieldValue("validFrom");
  if (validTo && validFrom) {
    return validTo.valueOf() <= validFrom.valueOf();
  }
  return false;
};

Communicating server-side input validation failures with GraphQL and ant-design form

Note: This post describes my developer experience – the steps I had to take to make it work. I am not describing only the final result but also the problems I experienced and how I dealt with them. In the development process, the issues snowball and spiral out of control.

When I chose a completely different programming language (TypeScript), server-side API approach (GraphQL) and React UI library (ant-design) for the development of my application, I did not know how much it would slow me down. Every new feature I wanted to implement (including the most basic ones) required spending some time researching on Google, StackOverflow and GitHub. This time it was no different – server-side validation and communicating the input validation failure to the user.

The form

The catalog object has two properties – validFrom and validTo – to make it valid only in this date range. Neither property is required, so the catalog can be active from a specific day forward, until some date, or forever. But the date range has to be valid: the end of validity has to be after the start of validity.

The validator is implemented on the server side using a custom validator and added to the properties of the input type with the @Validate(DateFromToValidator) decorator.

@InputType()
export class CatalogInput {
  @Field({ nullable: true })
  id?: number;
  @Field()
  name: string;
  @MaxLength(255)
  @Field({ nullable: true })
  description?: string;
  @Validate(DateFromToValidator)
  @Field({ nullable: true })
  validFrom?: Date;
  @Validate(DateFromToValidator)
  @Field({ nullable: true })
  validTo?: Date;
  @Field()
  isPublic: boolean;
  @Field()
  isEnabled: boolean;
}

The library uses class-validator, and I can create a custom validation constraint by implementing the ValidatorConstraintInterface interface. The interface has two methods:

  • validate, which should return true when everything is OK, and
  • defaultMessage, which returns the error message.

The code of the validation constraint (not reusable) to validate validFrom and validTo date range is as follows:

@ValidatorConstraint({ name: "dateFromToValidator", async: false })
export class DateFromToValidator implements ValidatorConstraintInterface {
  validate(value: string, args: ValidationArguments) {
    const catalog = args.object as CatalogInput;
    if (!catalog.isEnabled) {
      return true;
    }

    return catalog.validFrom <= catalog.validTo;
  }

  defaultMessage(args: ValidationArguments) {
    return "Text is too short or too long!";
  }
}

You can argue that I could have implemented the validation on the client. Still, the golden rule of API design is never to trust the values provided by the user and always validate the data on the server side. While client-side validation may improve the UX, it also means having two implementations of the validator.

You may also say that this is not a problem since the code can run both in the browser and on the server, and you may be right. But for now, let's leave it as it is 🙂 I will change it in the future.

When the validation fails on the server side, the GraphQL API returns a response with an errors array indicating a problem. The response has a 200 HTTP status code, unlike REST. The error response is well structured and contains some additional information like the stack trace. The Apollo server introduced standardized errors where additional details are provided in the extensions map.

{
  "errors": [
    {
      "message": "Argument Validation Error!",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "updateCatalog"
      ],
      "extensions": {
        "code": "BAD_USER_INPUT",
        "exception": {
          "validationErrors": [
            {
              "target": {
                "id": 2,
                "name": "Name",
                "description": "Description",
                "validFrom": "2019-12-19T11:42:31.972Z",
                "validTo": null,
                "isPublic": false,
                "isEnabled": true
              },
              "value": "2019-12-19T11:42:31.972Z",
              "property": "dateFromToValidator",
              "children": [],
              "constraints": {
                "catalogValidity": "Text is too short or too long!"
              }
            }
          ],
          "stacktrace": [
            "UserInputError: Argument Validation Error!",
            "    at Object.exports.translateError (C:\\Work\\playground\\app\\server\\src\\modules\\shared\\ErrorInterceptor.ts:69:11)",
            "    at C:\\Work\\playground\\app\\server\\src\\modules\\shared\\ErrorInterceptor.ts:29:13",
            "    at Generator.throw (<anonymous>)",
            "    at rejected (C:\\Work\\playground\\app\\server\\src\\modules\\shared\\ErrorInterceptor.ts:6:65)",
            "    at process._tickCallback (internal/process/next_tick.js:68:7)"
          ]
        }
      }
    }
  ],
  "data": null
}

To display the validation errors in the ant-design form, I have to convert the GraphQL error response to an object that can be passed to the setFields method of the form. The signature of the method is setFields(obj: Object): void; which is not very helpful. A search of the ant-design GitHub pages showed that the object passed in must have the same properties as the edited object. Each member is another object with a value property holding the edited object's value and an optional errors property containing the error(s) to be displayed.

const formFields: { [property: string]: { value: any, errors?: Error[] } } = {};

A failed attempt to mutate the data throws an exception – the response is rejected in Apollo. The error handler is a switch with cases for the possible error codes. Only user input errors are handled here (the BAD_USER_INPUT extension code).

try {
  await this.props.client.mutate<CatalogData>({
    mutation: UPDATE_CATALOG,
    variables: {
      catalog
    }
  });
} catch (e) {
  const apolloError = e as ApolloError;
  if (apolloError) {
    apolloError.graphQLErrors.forEach(apolloError => {
      const code = apolloError.extensions.code;
      switch (code) {
        case "BAD_USER_INPUT":
          const validationErrors = apolloError.extensions.exception.validationErrors;
          const formFields: { [property: string]: any } = {};
          validationErrors.forEach(
            (validationError: ApolloValidationError) => {
              const {
                target,
                property,
                constraints,
                value
              } = validationError;

              const errors = [];
              for (const key in constraints) {
                const value = constraints[key];
                errors.push(new Error(value));
              }

              formFields[property] = {
                value,
                errors
              };
            }
          );
          setTimeout(() => {
            form.setFields(formFields);
          }, 500);
          break;
        default:
          this.handleError(e);
          break;
      }
    });
  }
}

And this object will be passed into the setFields method:

{
  "validFrom": {
    "value": "2019-12-18T12:31:42.487Z",
    "errors": [
      Error("Text is too short or too long!")
    ]
  },
}

This code does not work – the DatePicker control expects a value of the moment type, and it gets a string instead. This attempt ends with warnings being written to the console and an exception being thrown:

Warning: Failed prop type: Invalid prop `value` of type `string` supplied to `CalendarMixinWrapper`, expected `object`.
Warning: Failed prop type: Invalid prop `value` supplied to `Picker`.
TypeError: value.format is not a function

When these input fields are rendered, the provided value is explicitly converted from a Date to a moment instance:

const momentFunc = (value?: Date) => moment(value).isValid() ? moment(value) : null;

<Form.Item label="Platnost do">
  <Row>
    <Col>
      {getFieldDecorator("validTo", { initialValue: momentFunc(catalog.validTo) })
        (<DatePicker locale={locale} />)
      }
    </Col>
  </Row>
</Form.Item>

The form is re-rendered, but the initialValue is not recalculated.

The quick and ugly hack is to convert the string values representing dates into moment instances. It is ugly because I have to list the names of the properties holding date values. Also, I can't use this as a general solution:

const momentFunc = (value?: string) => moment(value).isValid() ? moment(value) : null;

let v = value;
if (key === "validTo" || key === "validFrom") { 
  v = momentFunc(value);
}

const errors = [];
for (const key in constraints) {
  const value = constraints[key];
  errors.push(new Error(value));
}

formFields[property] = { value: v, errors };

This works now, with a minor problem – the validFrom input field is cleared on form submit and comes back with the validation failure message. Oops, I had been accidentally calling the form.resetFields method in the submit handler.

Types exist only at compile time

Even though TypeScript brings optional static type-checking, it is only performed at compile time. The type information (the metadata) does not become a part of the compiled JavaScript code (unlike in C# or Java). However, code elements can be extended with decorators to provide information at runtime.

Good news! I already have decorators in place – I have decorated the properties of the CatalogInput class. This type is used for updating (mutating) the catalog instance through the GraphQL API. Bad news – this class is located in the server project, which sits parallel to the client project 🙁 I will write a separate post on this once I figure out how to achieve code sharing between the client and the server.

For now, the ugly hack is to try to convert the given value to a moment instance:

const isValidDate = (dateObject: string) => new Date(dateObject).toString() !== "Invalid Date";

let v = value;
if (isValidDate(v)) {
  v = momentFunc(value);
}

const errors = [];
for (const key in constraints) {
  const value = constraints[key];
  errors.push(new Error(value));
}

formFields[property] = { value: v, errors };

A much better and robust solution would be to use the type information from the decorators on client-side or extend the GraphQL API response with type information (again, from member decorator on server-side).
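
A minimal sketch of what the decorator-based approach could look like on the client, assuming emitDecoratorMetadata is enabled and reflect-metadata is imported (the DateProperty decorator and the form model class below are hypothetical, not part of the actual project):

import "reflect-metadata";

// Hypothetical decorator that records which properties hold dates,
// using the design-time type emitted by the TypeScript compiler.
function DateProperty(): PropertyDecorator {
  return (target, propertyName) => {
    const type = Reflect.getMetadata("design:type", target, propertyName);
    if (type === Date) {
      const existing: (string | symbol)[] =
        Reflect.getMetadata("date:properties", target.constructor) || [];
      Reflect.defineMetadata("date:properties", [...existing, propertyName], target.constructor);
    }
  };
}

class CatalogFormModel {
  @DateProperty() validFrom?: Date;
  @DateProperty() validTo?: Date;
  name = "";
}

// The generic error-conversion code could then ask which properties need moment():
const dateProperties = Reflect.getMetadata("date:properties", CatalogFormModel) || [];
console.log(dateProperties); // ["validFrom", "validTo"]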

Getting started with Selenium the easier way

The aim of this project, which is available on my GitHub, is to simplify the initial phase of the development of automated end-to-end tests created using Selenium WebDriver.

Before you can start using Selenium, you have to get the driver for your browser and

  • set the path to that binary: System.setProperty("webdriver.gecko.driver", "path"), and
  • know the property name, here webdriver.gecko.driver.

An example of a manual setup can be found here.

The good

  • Everything is straightforward, and you know where everything is.

The bad

  • It is annoying.
  • You are hardcoding the path to the location of the driver in the source code;
    • Imagine you are on Windows and you share the code with somebody who is using Linux or macOS.
    • Or the system running the code is using a completely different folder structure (for example in the CI pipeline).
  • You must check whether a new driver was released.
  • You must download a different version of the driver when you upgrade your browser, and the driver is no longer compatible with your browser.
  • You will not be able to run tests in various browsers without refactoring.

This project hides all of this by employing the power of the WebDriverManager (WDM) project. WDM automatically detects the type and version of the browser installed on your system and downloads the appropriate driver. See the WDM project examples for more details.

All the necessary setup is reduced to

WebDriverManager.firefoxdriver().setup();

FirefoxOptions options = new FirefoxOptions();
options.merge(capabilities);
options.setHeadless(HEADLESS);

return new FirefoxDriver(options);

Execution

Tests can be run inside the IDE or from the command line using mvn clean test.

Configuration

The project combines WDM and the code I discovered recently with a little bit of configuration, so the tests can be executed in different browsers.

The execution can be configured with system properties.

mvn test -Dbrowser=firefox -Dheadless=true

Or with a .properties file located in the current context directory – for Maven test execution the directory is /mastering-selenium-testng/target/test-classes. See the execution log for details.

When system properties are used, they override the values from the properties file.

When a property value is not provided, the default value is used:

browser=chrome
headless=true
remote=false
seleniumGridURL=
platform=
browserVersion=

The configuration can change

  • browser type (chrome|firefox|edge|ie|safari|opera)
  • headless mode (true|false)
  • remote execution

Or create a run configuration for JUnit and set the "VM options":
[Screenshot: Run configuration]

Libraries used

Notes

Execution with Opera browser was not tested.

When running tests in Safari, you may get the following error

[ERROR] com.masteringselenium.tests.TodoMvcTests  Time elapsed: 0.994 s  <<< ERROR!
org.openqa.selenium.SessionNotCreatedException:
Could not create a session: You must enable the 'Allow Remote Automation' option in Safari's Develop menu to control Safari via WebDriver.
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03'
System info: host: '...', ip: '...', os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '10.14.6', java.version: '10.0.1'
Driver info: driver.version: SafariDriver

The solution, as mentioned in the error, is to enable the Allow Remote Automation option in Safari's Develop menu; for more details, see the Testing with WebDriver in Safari page.

Grab the code at https://github.com/kfrajtak/selenium-starter-kit and spend more time on testing than on setting up the infrastructure.

TypeScript, GraphQL, Sequelize … OMG

For my new project I decided to use TypeScript instead of JavaScript. I got fed up with all those unexpected JavaScript run-time errors occurring in my previous projects. It was not a very pleasant journey – so many new things, a steep learning curve, new syntax – but you get type safety, which is nice.

All my previous projects were using a REST API. I decided to give GraphQL a try. It was not a very pleasant journey – so many new things, a steep learning curve, new syntax, Apollo, variables, no HTTP status codes to detect errors, etc. – but you get the playground, which is nice.

Right now I doubt the benefits of using it. At least, that was true when I started writing this post.

The basic idea is not to reinvent the wheel, so I chose the graphql-modules NPM package for "easier" adoption. The library encourages you to create reusable modules encapsulating functionality, has dependency injection support, and the modules should be testable (I do not know, I haven't got there yet).

The most important part is the resolver, which is a function that resolves the value of a field of a type in the schema; resolution starts at the root object returned by the query and resolves all the requested fields. This is one of the features of GraphQL – you get what you asked for, no less, no more. If you do not ask for a field, it will not appear in the result (compare that with REST). The fancy terms are under-fetching and over-fetching.
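
For illustration, a client query against the schema shown later in this post could ask only for the user names – the roles field is simply left out, so its resolver never runs (the exact query shape here is only a sketch, assuming a client that uses graphql-tag):

import gql from "graphql-tag";

// Only the listed fields are resolved and returned - omitting `roles`
// means its field resolver is never executed (no over-fetching).
const GET_USER_NAMES = gql`
  query {
    users(page: 1, pageSize: 20) {
      totalCount
      records {
        id
        firstName
        lastName
      }
    }
  }
`;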

But, back to the code: I have two entities, User with many Roles. In the User model the association is defined as

@BelongsToMany(() => Role, () => UserRole) roles: Role[];

Auto-generated schema from these types

@ObjectType()          
export class User {
  @Field(type => ID) id: number; 
  @Field() firstName: string;
  @Field() lastName: string;
  @Field() emailAddress: string;
  @Field() suspended: boolean;
  @Field() confirmed: boolean;
  @Field() deleted: boolean;
}          

@ObjectType()
export class Role {
  @Field() name: string;          
  @Field() description: string;
}          

@ObjectType()
export class QueryResult {
  @Field() page: number;
  @Field() rowsPerPage: number;
  @Field() totalCount: number;
  @Field(type => [User]) records: User[];
}          

@ArgsType()
export class SearchUsersArgs {
  @Field({ nullable: true }) sortBy?: string;          
  @Field(type => Boolean, { nullable: true }) sortDirection?: Boolean;
 
  @Field(type => Int, { defaultValue: 1, nullable: true })
  @Min(0) page: number = 1;          

  @Field(type => Int, { nullable: true })
  @Min(10)
  @Max(50)
  pageSize = 20;
}

and the resolver has two parametrized queries – one to return a list of users

@Query(returns => QueryResult)
async users(@Args() input: SearchUsersArgs) {
  return await this.UsersProvider.getUsers(input);
}

and another to find one user by id

@Query(returns => User)
async user(@Arg("id", type => Int) id: number) {
  const user = await this.UsersProvider.getUserById(id);
  if (user) { 
    return user;
   }
  throw new NotFoundError("User with given id was not found.");
}

The user properties like name, emailAddress, etc. are resolved by default just by accessing the object properties. For the roles the situation is different – the data is stored in a different table in the MySQL database. The first idea was to JOIN the data and include the roles in the query, getting the roles along with the users in a single database roundtrip.

const users = await User.findAll({
 attributes: {
   exclude: ["password", "password_confirmation", "password_digest"]
 },
 include: [{ model: Role }],
 offset: (page - 1) * pageSize,
 limit: pageSize,
 order: [[sortBy, sortDirection]]
});

I quickly dismissed this approach because I can't tell in advance whether the user wants to include the roles in the result set, and I might be loading data from the database without using it later (over-fetching). Even though there is only one association now, this might change in the future, and I would have to include all of them, which would result in a query JOINing many tables just to return the names of the users.

The GraphQL-like solution to this problem is to create a resolver for the roles field:

@FieldResolver(returns => [Role])
async roles(@Root() root: User) { ... }

At first I had a mismatch in the type returned by the GraphQL query, but because the instances returned by Sequelize have field accessors, the result was as expected for the basic queries returning name, emailAddress, etc.

The value of the root parameter with the @Root annotation is provided by the GraphQL engine. I have two types representing the user: the database entity User and the data transfer object UserDTO. The root here was initially the instance of the model class, i.e. the entity loaded from the database; I thought it should be the instance of the DTO class instead, which fits better in the world of GraphQL. What I lose is the model's capability and the access to its properties, fields and association collections (which I should not have access to anyway, because I don't want to over-fetch). But the resolver should return DTOs because I don't know what the end user wants, and returning a database entity may leak some information to the end user.
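
The DTO classes are not shown in this post; a minimal sketch of what they might look like – plain type-graphql object types carrying only the fields the API should expose:

import { Field, ID, ObjectType } from "type-graphql";

// Plain data transfer objects - no Sequelize behaviour attached,
// so there is nothing to over-fetch and nothing to leak.
@ObjectType()
export class RoleDTO {
  @Field() name: string;
  @Field() description: string;
}

@ObjectType()
export class UserDTO {
  @Field(type => ID) id: number;
  @Field() firstName: string;
  @Field() lastName: string;
  @Field() emailAddress: string;
}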

The correct declaration of the resolver for the roles field should then be

async roles(@Root() root: User): Promise<RoleDTO[]> { ... }

I query the database to find the corresponding user entity for the root passed in by the GraphQL engine.

@FieldResolver(returns => [RoleDto])
async roles(@Root() root: User): Promise<RoleDTO[]> {
  const user = await this.UsersProvider.getUserById(root.id); // fetch the user again
  if (user) {
    const roles = (await user.$get<Role>("roles")) as Role[]; // get roles for current user
    return roles.map<RoleDTO>(r => { 
      return {
        name: r.name,
        description: r.name
      };
   }); 
 } 

 return [];
}

Now I have a different problem: the SELECT N+1 problem. This resolver queries the database to get the roles for every user returned by the GraphQL query (not to mention the initial query to get the users):

Executing (default): SELECT id, userName, firstName, lastName, email AS emailAddress, confirmed, suspended, deleted FROM m_user AS User WHERE User.id = 15;

Executing (default): SELECT id, userName, firstName, lastName, email AS emailAddress, confirmed, suspended, deleted FROM m_user AS User WHERE User.id = 76;

Executing (default): SELECT id, userName, firstName, lastName, email AS emailAddress, confirmed, suspended, deleted FROM m_user AS User WHERE User.id = 6;

Executing (default): SELECT id, userName, firstName, lastName, email AS emailAddress, confirmed, suspended, deleted FROM m_user AS User WHERE User.id = 9;          

...          

Executing (default): SELECT Role.id, Role.shortname AS name, UserRole.roleId AS UserRole.roleId, UserRole.userId AS UserRole.userId FROM m_role AS Role INNER JOIN m_role_assignments AS UserRole ON Role.id = UserRole.roleId AND UserRole.userId = 15;          
Executing (default): SELECT Role.id, Role.shortname AS name, UserRole.roleId AS UserRole.roleId, UserRole.userId AS UserRole.userId FROM m_role AS Role INNER JOIN m_role_assignments AS UserRole ON Role.id = UserRole.roleId AND UserRole.userId = 76;      
Executing (default): SELECT Role.id, Role.shortname AS name, UserRole.roleId AS UserRole.roleId, UserRole.userId AS UserRole.userId FROM m_role AS Role INNER JOIN m_role_assignments AS UserRole ON Role.id = UserRole.roleId AND UserRole.userId = 6;          

Executing (default): SELECT Role.id, Role.shortname AS name, UserRole.roleId AS UserRole.roleId, UserRole.userId AS UserRole.userId FROM m_role AS Role INNER JOIN m_role_assignments AS UserRole ON Role.id = UserRole.roleId AND UserRole.userId = 9;

The problem has an easy solution: use a dataloader – in my case dataloader-sequelize, since I am using Sequelize to access the database.

The dataloader hooks itself into the Sequelize methods (findByPk in this example) and hijacks them, replacing them with a smart caching mechanism. If you prime the dataloader context with objects and later want to load an object by id from the database, the dataloader will check its cache, and if the object is there, it will return it immediately, thus avoiding a database roundtrip.

import { createContext, EXPECTED_OPTIONS_KEY } from "dataloader-sequelize";

@Query(returns => UserQueryResult)
async users(
  @Args() input: UserSearchArgs,
  @Ctx() ctx: Context
): Promise<UserQueryResult> {
  const context = createContext(User.sequelize); // create dataloader context 
  const found = await this.usersProvider.getUsers(input); // get the users
  context.prime(found.records); // prime the context with found records
  ctx["dataloader-context"] = context; // remember the dataloader context in GraphQL context
  return {
    page: found.page,
    users: found.records.map(this.convert), 
    rowsPerPage: found.rowsPerPage,
    totalCount: found.totalCount
  };
}

And the roles resolver method:

@FieldResolver(returns => [RoleDTO])
async roles(@Root() root: UserDTO, @Ctx() ctx: Context): Promise<RoleDTO[]> {
  const context = ctx["dataloader-context"];
  const user: User = await User.findByPk(root.id, {
    attributes: {
      exclude: ["password", "password_confirmation", "password_digest"]
    },
    [EXPECTED_OPTIONS_KEY]: context // pass in the dataloader context
  });

  if (user) {
    const roles = (await user.$get<Role>("roles", { // nasty TypeScript workaround
      [EXPECTED_OPTIONS_KEY]: context // pass in the dataloader context
    })) as Role[];
    return roles.map<RoleDTO>(r => {
      return {
        id: r.id,
        name: r.name,
        description: r.name
      };
    });
  }
  throw new NotFoundError("User with given id was not found.");
}

Dataloader has another magic feature: it batches similar database queries until the very last moment. Instead of loading the roles in N round trips, one for each user, only a single query is executed:

Executing (default): SELECT Role.id, Role.shortname AS name, UserRole.roleId AS UserRole.roleId, UserRole.userId AS UserRole.userId FROM m_role AS Role INNER JOIN m_role_assignments AS UserRole ON Role.id = UserRole.roleId AND UserRole.userId in (15, 76, 6, 9);

In total, only two queries are executed – one to get the users and one to get the roles of all users from the previous result set.

It works now, but the road to this point was quite bumpy. I have discussed the issues I ran into with the authors of the libraries I used on GitHub (for example here), discussed problems that only existed on my machine (and were the result of my misunderstanding of how the library works), and tried and failed miserably to fix some issues myself (here).

Always create bidirectional relations in your entities; the dataloader will like you more.
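
For completeness, a sketch of what a bidirectional declaration could look like with sequelize-typescript (the table and column names here are assumptions based on the queries above):

import { BelongsToMany, Column, ForeignKey, Model, Table } from "sequelize-typescript";

@Table({ tableName: "m_user" })
export class User extends Model<User> {
  // ...other columns omitted...
  @BelongsToMany(() => Role, () => UserRole) roles: Role[];
}

@Table({ tableName: "m_role" })
export class Role extends Model<Role> {
  // the reverse side of the association, declared explicitly
  @BelongsToMany(() => User, () => UserRole) users: User[];
}

@Table({ tableName: "m_role_assignments" })
export class UserRole extends Model<UserRole> {
  @ForeignKey(() => User) @Column userId: number;
  @ForeignKey(() => Role) @Column roleId: number;
}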

Despite the steep learning curve, I will choose TypeScript and GraphQL next time too. I don't want to throw out everything I just learned 🙂

Improving readability of the code with enumerable and yield

You have probably written code like this (I certainly have). The use case is not important (it searches for a text describing an enum value). It tries to find a value in various locations until a non-null value is found:

string FindDisplayString(object value, string name)
{
  string displayString = null;
  if (resourceManager != null)
  {
    displayString = resourceManager.GetString($"{type.Name}.{name}");
    if (displayString != null)
    {
      return displayString;
    }

    displayString = resourceManager.GetString($"{type.Name}.{value.ToString()}");
    if (displayString != null)
    {
      return displayString;
    }
  }

  displayString = value.GetDescriptionAttribute(null);
  if (displayString != null)
  {
     return displayString;
  }

  return name ?? value.ToString();
}

Then I realized I can do better by using IEnumerable<string> and yield return:

IEnumerable<string> FindDisplayString(object value, string name)
{
    if (resourceManager != null)
    {
        yield return resourceManager.GetString($"{type.Name}.{name}");
        yield return resourceManager.GetString($"{type.Name}.{value.ToString()}");
    }

    yield return value.GetDescriptionAttribute(null);
    yield return name;
    yield return value.ToString();

    yield break;
}

This time the search is "suspended" on every yield line and won't continue until the next value is required. I can then decide whether to continue or stop:

var result = FindDisplayString(i, Enum.GetName(type, i)).FirstOrDefault(s => s != null);

Roslyn analyzer with external references

I was creating a Roslyn-powered analyzer to detect proper usage of WinForms components that are not added to a form through the designer. When you add a control through the designer, it is properly disposed when the form itself is disposed. The designer also adds a field named components of type System.ComponentModel.IContainer that is disposed in the Dispose method:

protected override void Dispose(bool disposing)
{
  if (disposing && (components != null))
  {
    components.Dispose();
  }
  base.Dispose(disposing);
} 

But if you create, for example, a timer manually, you will have to dispose of it manually later. Instead of handling the disposal of that timer this way, it is easier to add it to that container, where it will be disposed along with all the other controls.

The analyzer worked fine when a new instance of VS.NET was launched and the debugger was actually debugging the code running in that instance. But this process is somewhat slow, and you spend some time waiting for the new instance of VS.NET to open.

When you create a new Roslyn analyzer project using the project template, a test project is created as well with ready-to-run code. Because it is much easier to do it the TDD way, I created a new test based on the test classes already available. But my tests were failing.

I quickly discovered the problem – in the analyzer code there is a type check, because I do not want to carry on with the analysis when the class being analyzed does not inherit from the System.Windows.Forms.Form class:

var formType = context.SemanticModel.Compilation.GetTypeByMetadataName("System.Windows.Forms.Form");

and the reason the test was failing was that formType was null, because there is no metadata available when the test is executed. It does not fail in the second instance of VS.NET, because there the project is fully loaded with all its references.

So I went straight to the Roslyn GitHub repository, searched for mscorlib, System.Linq, etc., and found out that the tests there add references through properties like this one:

private static MetadataReference s_systemWindowsFormsRef;
public static MetadataReference SystemWindowsFormsRef
{
  get
  {
    if (s_systemWindowsFormsRef == null)
    {
      s_systemWindowsFormsRef = AssemblyMetadata.CreateFromImage(TestResources.NetFX.v4_0_30319.System_Windows_Forms).GetReference(display: "System.Windows.Forms.v4_0_30319.dll");
    }

    return s_systemWindowsFormsRef;
  }
}

The most important code here is AssemblyMetadata.CreateFromImage(TestResources.NetFX.v4_0_30319.System_Windows_Forms), which creates the assembly metadata object for the System.Windows.Forms assembly. After cloning the solution I discovered that the TestResources namespace comes from the external assembly Microsoft.CodeAnalysis.Test.Resources.Proprietary that was added as a NuGet package.

This library contains assembly metadata images for selected .NET Framework versions and assemblies (System, System.Data, System.Drawing, …). Note that the only version of that library available is a prerelease version.

What was left was to change the code generated by the project template to propagate the references

VerifyCSharpDiagnostic(test, new[]
{
  AssemblyMetadata.CreateFromImage(TestResources.NetFX.v4_0_30319.System_Windows_Forms)
    .GetReference(display: "System.Windows.Forms.v4_0_30319.dll"),
  AssemblyMetadata.CreateFromImage(TestResources.NetFX.v4_0_30319.System)
    .GetReference(display: "System.v4_0_30319.dll")
}, expected); 

deeper to where the Compilation is created and change the code there:

var compilation = CSharpCompilation.Create(
   "MyName",
   documents.Select(d => d.GetSyntaxTreeAsync().Result),
   references, // there are those references
   new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary, optimizationLevel: OptimizationLevel.Debug));
// and pass the analyzer 
var compilationWithAnalyzers = compilation.WithAnalyzers(ImmutableArray.Create(analyzer));

And then my unit tests worked! I recommend checking the unit testing part of the Roslyn project for guidelines on how to test analyzers and how to write reusable test components (and a rewrite is something my code needs right now).

My first Docker image

You have probably heard about Docker by now. Me too. But it took me a long time to jump on that bandwagon (as a .NET developer). And you have probably heard about NoSQL too 🙂

Recently I discovered RethinkDB

The open-source database for the realtime web

which looks really cool, and I decided to explore it a little. Since I know myself and I tend to spend ages setting up a piece of software (i.e. a trial & error approach with heavy help from Google), and in the end it crashes completely, I decided to try Docker as well. But … since I am now using Windows 10, boot2docker using VirtualBox does not work here, the VMware provider for Vagrant is $78 and … stop … end of excuses: I chose a CentOS image running in VMware Player.

So I got the CentOS image up and running with the Docker daemon running and the phusion/baseimage-docker image added (because it is special and fixes some stock Ubuntu 14.04 base image issues).

The base file is

# Use phusion/baseimage as base image. To make your builds reproducible, make
# sure you lock down to a specific version, not to `latest`!
# See https://github.com/phusion/baseimage-docker/blob/master/Changelog.md   for
# a list of version numbers.
FROM phusion/baseimage:<VERSION>

# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]

# ...put your own build instructions here...

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Googling docker rethinkdb, the second hit is this GitHub repository. So I took the jessie~2.0.4 (the latest version, of course) Dockerfile

FROM debian:jessie

MAINTAINER Stuart P. Bentley 

# Add the RethinkDB repository and public key
# "RethinkDB Packaging " http://download.rethinkdb.com/apt/pubkey.gpg
RUN apt-key adv --keyserver pgp.mit.edu --recv-keys 1614552E5765227AEC39EFCFA7E00EF33A8F2399
RUN echo "deb http://download.rethinkdb.com/apt jessie main" > /etc/apt/sources.list.d/rethinkdb.list

ENV RETHINKDB_PACKAGE_VERSION 2.0.4~0jessie

RUN apt-get update \
	&& apt-get install -y rethinkdb=$RETHINKDB_PACKAGE_VERSION \
	&& rm -rf /var/lib/apt/lists/*

VOLUME ["/data"]

WORKDIR /data

CMD ["rethinkdb", "--bind", "all"]

#   process cluster webui
EXPOSE 28015 29015 8080

and pasted the code into my base Dockerfile, tried to build the image and got an error:

The following packages have unmet dependencies:
rethinkdb : Depends: libprotobuf9 but it is not installable
            Depends: libstdc++6 (>= 4.9) but 4.8.4-2ubuntu1~14.04 is to be installed
E: Unable to correct problems, you have held broken packages.

So I googled around and found nothing. Then I decided to try to install the database directly as described on the RethinkDB page

source /etc/lsb-release && echo "deb http://download.rethinkdb.com/apt   $DISTRIB_CODENAME main" | sudo tee /etc/apt/sources.list.d/rethinkdb.list
wget -qO- http://download.rethinkdb.com/apt/pubkey.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install rethinkdb

And that worked!

That code in the Dockerfile, however, results in an error

/bin/sh: 1: source: not found

which can be easily fixed by running this command in the Dockerfile

rm /bin/sh && ln -s /bin/bash /bin/sh

That will install the RethinkDB startup service and start it! And the administration UI is accessible at http://localhost:8080!

During the installation I noticed a different version being installed this time – 2.0.4~0trusty – so I changed RETHINKDB_PACKAGE_VERSION to 2.0.4~0trusty and the database installation succeeded (but the service was not started)

invoke-rc.d: policy-rc.d denied execution of start.

Since I am not a Unix expert, I just ignored this and added this line to use the default configuration (I don't know how to use an external file in a Dockerfile):

RUN cp /etc/rethinkdb/default.conf.sample /etc/rethinkdb/instances.d/instance1.conf

In the end that did not work 🙁 and I went back to the working version (the one with source). Which did not work either – no connection was made to the RethinkDB Administration Console, and then it hit me! The container was not running:

docker run -d -p 8080:8080 -p 28015:28015 -p 29015:29015 dockerfile

And it worked! 😀 [Screenshot: RethinkDB Administration Console]

The final version of the Dockerfile is here

FROM phusion/baseimage:0.9.17

# fixes 'source: not found'
RUN rm /bin/sh && ln -s /bin/bash /bin/sh

# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]

# ...put your own build instructions here...
RUN apt-get update 

RUN apt-get install -y wget 

# Add the RethinkDB repository and public key
RUN apt-key adv --keyserver pgp.mit.edu --recv-keys 1614552E5765227AEC39EFCFA7E00EF33A8F2399
RUN echo "deb http://download.rethinkdb.com/apt trusty main" > /etc/apt/sources.list.d/rethinkdb.list

ENV RETHINKDB_PACKAGE_VERSION 2.0.4~0trusty

RUN apt-get update \
	&& sudo apt-get install -y rethinkdb=$RETHINKDB_PACKAGE_VERSION 

RUN cp /etc/rethinkdb/default.conf.sample /etc/rethinkdb/instances.d/instance1.conf

VOLUME ["/data"]

WORKDIR /data

CMD ["rethinkdb", "--bind", "all"]

# process cluster webui
EXPOSE 28015 29015 8080

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

NInject problem with remote proxies

I have run into the following issue while working with NInject:

System.TypeInitializationException: The type initializer for 'ClassA' threw an exception. --->    
System.Runtime.Remoting.RemotingException: Attempted to call a method declared on type 'Ninject.IInitializable' on an object which exposes 'ClassB'.
Server stack trace:
  at System.Runtime.Remoting.Messaging.StackBuilderSink.VerifyIsOkToCallMethod(Object server, IMethodMessage msg)
  at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg)

Exception rethrown at [0]: 
  at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
  at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
  at Ninject.IInitializable.Initialize()
  at Ninject.Activation.Strategies.InitializableStrategy.b__0(IInitializable x) in InitializableStrategy.cs:line 28

The problem is that NInject tries to call Ninject.IInitializable.Initialize() on a remoting proxy (ClassB), and that fails since that class does not implement the interface. This call is made by one of the activation strategies in the pipeline.

The workaround (inspired by Ydie's blog post) is to create a new kernel class derived from StandardKernel, remove all instances of IActivationStrategy from the Components collection, add back some of the default strategies, and add my own strategy that does not try to invoke the Initialize method when the object being activated is an instance of the MarshalByRefObject class.

I don't have any class implementing the IStartable interface, so I'm not re-adding the StartableStrategy, but for a complete fix that strategy should also be modified. Or the pipeline can be changed to ignore remoting proxies completely…

public class MyStandardKernel : StandardKernel
{
    protected override void AddComponents()
    {
        base.AddComponents();
        // remove the all activation strategies
        Components.RemoveAll(typeof(IActivationStrategy));
        // add some of the default strategies back (code copied from NInject sources)
        if(!Settings.ActivationCacheDisabled)
        {
            Components.Add<IActivationStrategy, ActivationCacheStrategy>();
        }

        Components.Add<IActivationStrategy, PropertyInjectionStrategy>();
        Components.Add<IActivationStrategy, MethodInjectionStrategy>();
        // I don't need this
        // Components.Add<IActivationStrategy, StartableStrategy>();
        Components.Add<IActivationStrategy, BindingActionStrategy>();
        Components.Add<IActivationStrategy, DisposableStrategy>();
        // this is the new strategy
        Components.Add<IActivationStrategy, RemotingProxyAwareInitializableStrategy>();
    }
}

public class RemotingProxyAwareInitializableStrategy : ActivationStrategy
{
    /// <summary>
    /// Initializes the specified instance.
    /// </summary>
    /// <param name="context">The context.</param>
    /// <param name="reference">A reference to the instance being activated.</param>
    public override void Activate(IContext context, InstanceReference reference)
    {
        if(reference.Is<MarshalByRefObject>()) 
        {
            return;
        }

        reference.IfInstanceIs<IInitializable>(x => x.Initialize());
    }
}

NInject's modularity lets you replace different core components, and that's really great. I'm looking forward to the next problem 🙂

Roslyn powered live code analyzer

First there was a problematic WPF binding property, then I had to check all binding properties and then I thought about using FxCop to do that dirty job for me. But unfortunately FxCop is no longer developed and supported. That made me a little bit sad, since I really liked that tool and its power and usefulness.

But then I found the article by Alex Turner, Use Roslyn to Write a Live Code Analyzer for Your API, and after reading it I no longer mourned FxCop. The new Roslyn-powered while-you-type code analysis and error reporting is incredible!

And I decided to write my own DiagnosticAnalyzer rule. The rule I implemented is the good old CA2000: Dispose objects before losing scope. It is already there among the other FxCop rules in the Microsoft.CodeAnalyzers.ManagedCodeAnalysis rule set, but I just wanted to try it and find out how difficult it is to create such a rule.

First I installed the required VSIX packages, then created a project from the given template, copied the code from the MSDN page and started investigating how to check whether an object is disposed after all references to it are out of scope.

First I created a dummy class with both invalid and valid usages of disposable objects.

This is the code of the DummyClass class I used for testing (the test project is generated using the VS template and added to the solution).

using System.IO;
using System.Text;

class DummyClass
{
    private Stream _s;

    public DummyClass()
    {
        /* 
         * almost correct usage: 
         * value of field _s must be disposed later 
         * (maybe the rule can suggest to implement IDisposable interface) 
         */
        _s = Create();

        /* 
         * correct usage: 
         * assigning IDisposable inside using block to variables
         */
        using (Stream a = Create(), b = Create()) { }

        /* 
         * correct usage: 
         * assigning IDisposable inside using block to a previously declared variable 
         */
        Stream c;
        using (c = Create()) { }

        /* 
         * incorrect usage: 
         * not using using statement for declaration and initialization of a IDisposable variable 
         */
        var d = Create();

        /*
         * these lines were added just to prove that the rule is ignoring non-IDisposable variables
         */
        var sb = new StringBuilder(); // declaration and initialization of a non-IDisposable variable  
        StringBuilder sb2;
        sb2 = new StringBuilder(); // assigning non-IDisposable to a previously declared variable
    }

    Stream Create()
    {
        return null; // the real value is not important, return type is
    }

    public void Method()
    {
        /* 
         * incorrect usage: 
         * not using using statement for declaration and initialization of a IDisposable variable 
         */
        var stream = new MemoryStream();
    }
}

Note: I have found it very useful to keep the sample code in a separate compilable file and not in a string variable in a test method. The advantage is that you know the code is valid, and it is easier to locate the reported error (unless you're debugging your project in a sand-boxed Visual Studio instance).

The Roslyn Syntax Visualizer helped me identify the nodes I have to check; those are

  • VariableDeclaration – for example var a = DisposableObject(); (note that more than one variable can be declared and assigned)
  • SimpleAssignmentExpression – for example a = DisposableObject();

An action has to be registered to trigger the analysis after the semantic analysis of those syntax nodes is completed:

public override void Initialize(AnalysisContext context)
{
    context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.VariableDeclaration);
    context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.SimpleAssignmentExpression);
}

Note: I used one callback action for both syntax nodes, but you can register one for each node and make the code cleaner.

And the final result is here – the basic idea is to check whether the type of the RHS node implements IDisposable, and if it does, check whether the assignment happens inside a using block, with one exception: when the value is assigned to a class field.

using System.Linq;
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

namespace Dev5.CodeFix.Analyzers
{
    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public class DisposeObjectsBeforeLosingScopeRule : DiagnosticAnalyzer
    {
        public const string DiagnosticId = "DisposeObjectsBeforeLosingScopeRule";
        internal const string Title = "Dispose objects before losing scope";
        internal const string MessageFormat = "A local object of a IDisposable type is created but the object is not disposed before all references to the object are out of scope.";
        internal const string Category = "Reliability";
        internal static DiagnosticDescriptor Rule = new DiagnosticDescriptor(DiagnosticId, Title, MessageFormat, Category, DiagnosticSeverity.Error, isEnabledByDefault: true);

        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        {
            get { return ImmutableArray.Create(Rule); }
        }

        public override void Initialize(AnalysisContext context)
        {
            context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.VariableDeclaration);
            context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.SimpleAssignmentExpression);
        }

        /// <summary>
        /// Gets type symbol for System.IDisposable interface.
        /// </summary>
        /// <param name="compilation"></param>
        /// <returns></returns>
        public static INamedTypeSymbol IDisposable(Compilation compilation)
        {
            return compilation.GetTypeByMetadataName("System.IDisposable");
        }

        /// <summary>
        /// Returns boolean value indicating if the <paramref name="typeInfo"/> implements System.IDisposable interface.
        /// </summary>
        /// <param name="typeInfo">TypeInfo to check</param>
        /// <param name="compilation"></param>
        /// <returns></returns>
        private static bool IsDisposable(TypeInfo typeInfo, Compilation compilation)
        {
            if(typeInfo.Type == null)
            {
                return false;
            }
            return !typeInfo.Type.IsValueType && typeInfo.Type.AllInterfaces.Any(i => i.Equals(IDisposable(compilation)));
        }

        private void AnalyzeNode(SyntaxNodeAnalysisContext context)
        {
            var semanticModel = context.SemanticModel;
            var compilation = context.SemanticModel.Compilation;
            // are we inside using block? i.e. is the Parent of current node UsingStatement
            var insideUsingStatement = context.Node.Parent is UsingStatementSyntax;

            var declaration = context.Node as VariableDeclarationSyntax;
            // variable declaration node
            if (declaration != null)
            {
                // more than one variable can be declared
                foreach (var declarator in declaration.Variables)
                {
                    var variable = declarator.Identifier;
                    var variableSymbol = semanticModel.GetDeclaredSymbol(declarator);
                    var eq = declarator.Initializer as EqualsValueClauseSyntax;
                    var varTypeInfo = semanticModel.GetTypeInfo(eq?.Value);
                    // non-disposable variable is declared or currently inside using block
                    if (!IsDisposable(varTypeInfo, compilation) || insideUsingStatement)
                    {
                        continue;
                    }

                    // report this
                    context.ReportDiagnostic(Diagnostic.Create(Rule, declarator.GetLocation()));
                }
                return;
            }

            var assignment = context.Node as AssignmentExpressionSyntax;
            if (assignment != null)
            {
                // does the type of the RHS node implement IDisposable?
                var typeInfo = semanticModel.GetTypeInfo(assignment.Right);                
                if (!IsDisposable(typeInfo, compilation))
                {
                    return;
                }

                var identifier = assignment.Left as IdentifierNameSyntax;
                var kind = semanticModel.GetSymbolInfo(identifier).Symbol;
                // assigning field value or currently inside using block
                if (kind?.Kind == SymbolKind.Field || insideUsingStatement)
                {
                    return;
                }

                // report this
                context.ReportDiagnostic(Diagnostic.Create(Rule, assignment.GetLocation()));
                return;
            }
        }
    }
}

During the development of the analysis rule I found the Syntax Visualizer really, really helpful (but if you have it installed, it is good to pin it to the sample code only – VS was quite sluggish when I switched tabs to the rule file). Google is also very helpful (as usual), since I was struggling to get the type and symbol information.

But overall I am super excited about this functionality, writing the rules is not that difficult, the live analysis is pretty impressive and the possibilities are infinite!

The code is available on GitHub.

How Powershell helped me to solve the assembly conflict

In one of my test projects the tests suddenly started to fail. Which is a bad thing. What was worse, and strange, was the reason why they were failing:

System.TypeInitializationException : The type initializer for 'RMReportingPortalDataLayer.Strategies.ReportDeliveryStrategy' threw an exception.
----> System.IO.FileNotFoundException : Could not load file or assembly 'log4net, Version=1.2.11.0, Culture=neutral, PublicKeyToken=669e0ddf0bb1aa2a' or one of its dependencies. The system cannot find the file specified.}}

First I checked the bin directory and the assembly was not there. Then I checked the build log and found this line

No way to resolve conflict between "log4net, Version=1.2.13.0, Culture=neutral, PublicKeyToken=669e0ddf0bb1aa2a" and "log4net, Version=1.2.10.0, Culture=neutral, PublicKeyToken=692fbea5521e1304". Choosing "log4net, Version=1.2.13.0, Culture=neutral, PublicKeyToken=669e0ddf0bb1aa2a" arbitrarily.

OK, there is a conflict between incompatible versions of the log4net assembly. But which of the assemblies referenced in my project are the suspects? How do I check all the referenced assemblies of the referenced assemblies? It's a task nobody would like to perform manually. I like PowerShell, so I googled around, found this post and made a few changes – build the hash and write the formatted output to the console:

$hash = $references | Group-Object Name, Version -AsString -AsHashTable

$hash.GetEnumerator() | Sort-Object Name | % { 
  $key = $_.Key.ToString().Trim()
  $value = $_.Value
  Write-Host $key
  $s = [string]::join([Environment]::NewLine + '   * ', ($value | Select-Object -ExpandProperty Who | Get-Unique | Sort-Object Who))
  Write-Host '   *', $s
}

Which produces a nice output, and everything is clearer now 🙂

log4net, 1.2.11.0
* Lib.Shared.Admin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3941ae83427745cf
* Lib.Shared, Version=10.0.0.0, Culture=neutral, PublicKeyToken=3941ae83427745cf
log4net, 1.2.13.0
* Lib.Common, Version=0.2.10.0, Culture=neutral, PublicKeyToken=null

I was referencing an older version of the Lib.Common library, so I updated it to the latest version and the problem was solved!

Note that PowerShell loads all the assemblies in the folder and keeps them loaded until the PowerShell window is closed. Or you can spawn a new PowerShell process:

$command = [System.IO.File]::ReadAllText("path to dependencies.ps1")
$bytes = [System.Text.Encoding]::Unicode.GetBytes($command)
$encodedCommand = [Convert]::ToBase64String($bytes)
powershell -NoProfile -EncodedCommand $encodedCommand # |%{$_}

The script is not perfect and you have to change the path to the bin directory …