Allow anonymous SPA users to access secure resources

Have you faced the dilemma where your SPA has to access protected resources outside its domain before the user has logged in? Let’s say you need to display products to all users, whether signed in or not. You have a Products API protected using the OAuth2 Client Credentials flow. For security reasons you don’t want to publish this API with anonymous access, but you still want anonymous users to benefit from it. The client itself would be configured to use the OAuth2 Implicit/Hybrid flow.
Below is the standard Client Credentials flow.

OAuth2 Client Credentials Flow

The SPA will have an access token only after the user has logged in to the authorisation server. The client can then access the protected API using an access token carrying the required scope. We are not speaking of token management here, but of client secret management.

Now when a user is still anonymous, we still need to provide them with data from our API. Secret management on the client side is not a viable option; the only place a client secret can be safely stored is with a trusted client running on the server side.
There is an active draft by the IETF which guides you on using OAuth with browser-based apps here. Based on its recommendation in section 6.2, we can move the authentication and token management to the ASP.NET Core app which shares the domain with the SPA. Token/session management would be done with cookies marked HttpOnly and with Lax or Strict SameSite mode. Check out this blog for more details on SameSite cookies.

This way we move all the authentication to the server, and a machine-to-machine exchange takes place through which a token is requested on behalf of the client.
The BFF (Backend for Frontend) architecture is more common with microservices and is sometimes also referred to as an API Gateway. Personally, in this scenario I lean towards calling this a BFF pattern, since that nomenclature makes the intent clearer: we are introducing this API specifically for one front end, namely the SPA.

Let us see what goes into implementing such an API.
In the same ASP.NET Core site which serves the static content, you can set up session management and forward requests to the backend API, attaching a valid access token along the way.
In Startup.cs

private const string Authority_TokenEndpoint = "http://localhost:62000/connect/token";
private const string token_cookie_name = "code.token";

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        options.HttpOnly = Microsoft.AspNetCore.CookiePolicy.HttpOnlyPolicy.Always;
        options.MinimumSameSitePolicy = SameSiteMode.Strict;
    });
    services.AddHttpClient();
    services.AddProxy();
}

You might have noticed the AddProxy() call. This comes from ProxyKit, a lightweight, code-first HTTP reverse proxy. In this case it handles the mundane task of forwarding requests to the back-end API with ease and style. It is very powerful and you can do a lot more with it.
Let us see how we use it next.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseCookiePolicy();
    string tokenForCookie = "";
    app.Use(async (context, next) =>
    {
        // get an access token via the client credentials flow
        var factory = app.ApplicationServices.GetRequiredService<IHttpClientFactory>();
        var client = factory.CreateClient();
        var tokenResponse = await client.RequestClientCredentialsTokenAsync(
            new ClientCredentialsTokenRequest
            {
                Address = Authority_TokenEndpoint,
                ClientId = "securebff",
                ClientSecret = "secret",
                Scope = "externalapi"
            });

        // check that we have a valid response before using the token
        if (tokenResponse.IsError)
        {
            context.Response.StatusCode = 502;
            return;
        }

        tokenForCookie = tokenResponse.AccessToken;
        var expiresIn = DateTime.Now.AddSeconds(tokenResponse.ExpiresIn);

        // save the token to a cookie
        var options = new CookieOptions
        {
            IsEssential = true,
            HttpOnly = true,
            SameSite = SameSiteMode.Strict,
            Secure = !env.IsDevelopment(),
            Expires = expiresIn
        };
        context.Response.Cookies.Append(token_cookie_name, tokenForCookie, options);
        await next();
    });

    app.Map("/external", api =>
    {
        api.RunProxy(async context =>
        {
            var forwardContext = context.ForwardTo("http://localhost:5000/api/test");

            var token = string.IsNullOrEmpty(tokenForCookie)
                ? context.Request.Cookies[token_cookie_name]
                : tokenForCookie;

            forwardContext.UpstreamRequest.SetBearerToken(token);
            forwardContext.AddXForwardedHeaders();
            // add retry on 401 or other conditions here if needed
            var response = await forwardContext.Send();
            return response;
        });
    });

    app.UseStaticFiles();
}

That’s it! As we receive a request, a new token is created. Here I am using IdentityModel.Client to retrieve an access token from the STS token endpoint. The middleware creates a cookie with the provided options and saves the token in it. If the request is for an external API protected by the access token, the mapped branch attaches the current access token, forwards the request to the external API and returns the response.
Of course, for brevity I have omitted checks like cookie existence, token expiration, and skipping token retrieval on every request. The STS and the external API are also not shown here, but there is nothing special about them.
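As a minimal sketch of one of those omitted checks (reusing the cookie name and middleware shape from above), the token request could be skipped whenever a previously issued token is still sitting in the cookie:

app.Use(async (context, next) =>
{
    // If a token cookie is already present, reuse it. Its Expires option was
    // aligned with the token lifetime, so a surviving cookie implies a live token.
    if (context.Request.Cookies.TryGetValue(token_cookie_name, out var existingToken)
        && !string.IsNullOrEmpty(existingToken))
    {
        await next();
        return;
    }

    // ...otherwise request a new token and append the cookie as shown earlier...
    await next();
});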

For completeness here is the new flow.

OAuth2 Client Credentials Flow with BFF

Some parting notes to consider. The work done by the BFF component need not be significant once the token is retrieved from the cookie for subsequent requests. The SameSite cookie helps stop CSRF attacks, though I am not sure it completely shields you from XSS attacks. You would also be adding extra server round-trips. It’s for you to decide whether this choice suits your architecture.

Entity Framework Core Owned Types explained

Owned entities have been available since EF Core 2.0. The same .NET type can be shared among different entities. Owned entities do not have a key or identity property of their own; they are always navigation properties of another entity. In DDD terms this is a value/complex type. Those coming from EF 6 may see a similarity with complex types in their models, but the way owned types work and behave in EF Core is different, and there are some gotchas to watch out for. We’ll explore these in detail here.

Let us work with the model shown below.

public class Student
{
    public int Id { get; set; }

    public string Name { get; set; }

    public Address Home { get; set; }
}

public class Address
{
    public string Street { get; set; }

    public string City { get; set; }
}

Here Student owns Address, which is the owned type and does not have its own identity property. Address becomes a navigation property on Student and always has a one-to-one relationship with it (at least for now).

The DbContext would be defined like this:

public class StudentContext : DbContext
{
    public DbSet<Student> Students { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Student>()
            .OwnsOne(s => s.Home);
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer("Server=(localdb)\\mssqllocaldb;Database=StudentDb;Trusted_Connection=True;App=StudentContext");
        optionsBuilder.EnableSensitiveDataLogging();
    }
}

An owned type cannot have a DbSet<> of its own; instead, in OnModelCreating you specify the Home property as an owned entity of Student.

Home would be mapped to the same table as Student.

Let us fire up this model and see it working.

using Microsoft.EntityFrameworkCore.Infrastructure;
using Microsoft.Extensions.Logging;

class Program
{
    static void Main(string[] args)
    {
        var _context = new StudentContext();
        _context.GetService<ILoggerFactory>().AddConsole();
        _context.Database.EnsureDeleted();
        _context.Database.EnsureCreated();

        InsertStudent(_context);
    }

    private static void InsertStudent(StudentContext context)
    {
        var student = new Student
        {
            Name = "Student_1",
            Home = new Address
            {
                Street = "Circular Quay",
                City = "Sydney"
            }
        };
        context.Students.Add(student);
        context.SaveChanges();
    }
}

I have added the Microsoft.EntityFrameworkCore.SqlServer and Microsoft.Extensions.Logging.Console packages.
From the console logs we can see that the Students table is created and a row inserted.

CREATE TABLE [Students] (
    [Id] int NOT NULL IDENTITY,
    [Name] nvarchar(max) NULL,
    [Home_City] nvarchar(max) NULL,
    [Home_Street] nvarchar(max) NULL,
    CONSTRAINT [PK_Students] PRIMARY KEY ([Id])
);

To query, just fetch the students; the owned entity is included as well.

var students = _context.Students.ToList();
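No Include() call is needed; the owned type comes back populated with its owner. For example:

var first = _context.Students.First();
// Home was materialised along with the student, without an explicit Include.
Console.WriteLine($"{first.Name} lives in {first.Home.City}");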

We can also store Address in another table, something we could not do with complex types in EF6. Simply call .ToTable() and provide a different name.

modelBuilder.Entity<Student>()
    .OwnsOne(s => s.Home)
    .ToTable("HomeAddress");

Now when you run the app, you will see two tables being created. Note the key column of the HomeAddress table: it references the Students table’s identity column.

CREATE TABLE [Students] (
    [Id] int NOT NULL IDENTITY,
    [Name] nvarchar(max) NULL,
    CONSTRAINT [PK_Students] PRIMARY KEY ([Id])
);
CREATE TABLE [HomeAddress] (
    [StudentId] int NOT NULL,
    [City] nvarchar(max) NULL,
    [Street] nvarchar(max) NULL,
    CONSTRAINT [PK_HomeAddress] PRIMARY KEY ([StudentId]),
    CONSTRAINT [FK_HomeAddress_Students_StudentId] FOREIGN KEY ([StudentId]) REFERENCES [Students] ([Id]) ON DELETE CASCADE
);

You can ignore properties which you do not want EF to track.

public class Address
{
    public string Street { get; set; }

    public string City { get; set; }

    public string State { get; set; } // ignore this
}

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Student>()
        .OwnsOne(s => s.Home, h =>
        {
            h.Ignore(a => a.State);
            h.ToTable("HomeAddress");
        });
}

There are certain things to keep in mind with EF Core; do not assume the same code from EF6 will give you similar behaviour. This is especially true of change tracking. In my view these changes are welcome and make tracking more intuitive and easier to navigate.

When you use Add, Attach, Update or Remove on either a DbSet<> or through the DbContext, it affects all reachable entities. Here is what that looks like:

context.Students.Add(student);

This also marks Address in an Added state.
But if you do not want to track all the entities in the graph:

context.Entry(student).State = EntityState.Added;

When you do this, only the student is marked for insert and the address is not. So how do you change the state of just the address?

var address = _context.Entry(student).Reference(s => s.Home).TargetEntry;
address.State = EntityState.Unchanged;

When you mark an entity in the graph for update, all of its properties are marked for update. In a disconnected (n-tier) scenario, you need to track changes on your entity externally and let EF know about them. You need the original state of the entity and some processing to know which properties changed. Or you can go back to the database, fetch the entity and compare its state.

var entry = _context.Attach(student);
var dbValues = entry.GetDatabaseValues(); // gets only the student
entry.OriginalValues.SetValues(dbValues);
_context.SaveChanges();

This updates only those columns which had changes on them. But it only affects the student object, not the address; the address remains in an Unchanged state. The entry.GetDatabaseValues() call above fetches only student values, not the address. To track changes on the address, you need to check its entry explicitly.

var entry = _context.Attach(student);
var adEntry = _context.Entry(student.Home);
adEntry.OriginalValues.SetValues(adEntry.GetDatabaseValues()); // gets home address
entry.OriginalValues.SetValues(entry.GetDatabaseValues()); // gets student
_context.SaveChanges();

Now on SaveChanges(), an update is issued for the Address too if any changes were found.
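To make that concrete, here is a small sketch of a disconnected update, with hypothetical values, where only the city differs from the stored row:

// Reconstructed on the client side; only Home.City differs from the database.
var student = new Student
{
    Id = 1,
    Name = "Student_1",
    Home = new Address { Street = "Circular Quay", City = "Melbourne" }
};

var entry = _context.Attach(student);
var adEntry = _context.Entry(student.Home);
adEntry.OriginalValues.SetValues(adEntry.GetDatabaseValues());
entry.OriginalValues.SetValues(entry.GetDatabaseValues());
_context.SaveChanges(); // issues an UPDATE only for the changed city column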

Windows 10 Fall Creators update crashes App Pool

The Windows 10 Fall Creators Update was not yet available on my PC, so I manually pulled the update. The upgrade seemed to run fine, except that when I tried starting one of my development services hosted on IIS, it did not start. Instead I saw HTTP Error 503, Service Unavailable. I checked the application pool assigned to this web site and it had stopped.

Under the Windows event log I saw this:
IIS-W3SVC-WP(2307)

The worker process for application pool <Pool Name> encountered an error ‘Cannot read configuration file’ trying to read configuration data from file ‘\\?\<EMPTY>’, line number ‘0’. The data field contains the error code.

I knew for sure that the update had caused this issue, as I was working on this particular web site just before the restart prompt made me close down my work.

I started by looking at the user account under which the app pool was running. It checked out fine. Next I cleared all the files under the Inetpub\temp folder. After restarting the services, the web site came up without fuss this time.

Curious, since I had no idea what had caused the issue in the first place, I started searching for support articles and came across Web applications return HTTP Error 503 and WAS event 5189 on Windows 10 Version 1709 (Fall Creators Update).

This explained why I was facing the issue, though the error message and the event logged were different. You also need to stop the W3SVC service, which the article seems to have missed; without it stopped, some files cannot be deleted and Remove-Item fails.

Solution
Stop the “Windows Process Activation Service” and the “W3SVC” service, then clean out (delete) all the files under C:\Inetpub\temp\AppPools\*. Start your services and the sites should be back at work.

WCF - One-way or the other

WCF One-way

I have always found WCF to be a great technology for many use cases. Before I ruffle anyone’s feathers out there: I love what Web API is capable of, and if I were providing HTTP services or anything targeted at the internet, I would choose Web API without hesitation.
I am also eagerly waiting to see the WCF service framework become part of .NET Core. We already have the WCF client libraries available for .NET Core.
Having cleared that up, let me get back to one such use case for WCF: making a fire-and-forget, or one-way, call. This is useful when the client truly does not care about the result, or when it needs to kick off a process on the server, usually long-running, and does not want to wait for it to finish.
WCF comes with great variety, power and flexibility, but to truly harness it one needs a deep understanding of its internals. You can use it out of the box without much mucking around, but sometimes its behaviour may not be obvious.

A quick recap of the WCF one-way pattern.
The default behaviour of a service operation is the request-reply pattern; to make it one-way you simply mark the OperationContract with IsOneWay.

[ServiceContract]
public interface IOneWayService
{
    [OperationContract(IsOneWay = true)]
    void Process(int seed);
}

A few things to keep in mind when decorating an operation as one-way:

  • The method has to return void.
  • You cannot return faults to the client. That means you cannot decorate the operation with the FaultContract(typeof(Exception)) attribute.

Even if you unintentionally did the above on a one-way operation, the service would throw an error when you attempt to start it.
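For illustration, a sketch of the invalid combination (the interface name is made up); the host refuses to open a contract like this:

[ServiceContract]
public interface IBrokenOneWayService
{
    // Invalid: a one-way operation cannot carry a fault contract,
    // so opening the host throws instead of starting the service.
    [OperationContract(IsOneWay = true)]
    [FaultContract(typeof(Exception))]
    void Process(int seed);
}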

Your service implementation is going to be nothing different here, and neither is the hosting, so I won’t be delving into them. You can simulate some load with a Thread.Sleep, as in the sketch below.
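Here is one such hypothetical implementation; the seed merely sizes the simulated workload:

public class OneWayService : IOneWayService
{
    public void Process(int seed)
    {
        // Simulate a long-running job; a one-way client should not be waiting on this.
        Thread.Sleep(seed * 1000); // milliseconds
    }
}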
So, would you get a fire-and-forget operation from your clients? It depends, and on quite a few things. First let us see what we can expect with our current implementation. Let me show you my client proxy first.

public class OneWayProxy : ClientBase<IOneWayService>, IOneWayService
{
    public void Process(int seed)
    {
        Channel.Process(seed);
    }
}

I like to hand-code my proxy/client for the service; I’ll probably keep that for another post. I also implement my clients differently from what I have shown here, but even this is better than the freebie client you get from Visual Studio.

Let us look at the code calling our client.

OneWayProxy proxy = new OneWayProxy();
proxy.Process(5);
//proxy.Process(10);
proxy.Close();

The input parameter is just to make the service look important.

Here is what we see from running this implementation.

  1. The call to the proxy is asynchronous (at the client). You can uncomment the second call and verify that; neither call blocks.
  2. Closing the channel might block. If the binding used is NetTcpBinding, by default it supports transport-level sessions, which means the channel is kept open until the server completes processing all of the client’s calls. If you use a transport without a session, like BasicHttpBinding, then closing the channel does not block.
  3. The calls are dispatched synchronously on the service, meaning your next call only gets processed after the previous one completes.

So what we learn is that using one-way throws up a few surprises. It is fire and forget only for the operation calls. When you have long-running processes, you might not want to wait for the operations to complete before closing the channel. And yes, you should always close the channel so that the connection is returned to the server’s pool.

Since most uses of WCF within the firewall would prefer the TCP protocol over HTTP for speed and security, and I would like to close my channel after each call is made but not have it block, the above implementation is not the most useful.

So what are our options here?
To close the proxy before the operation finishes:

  • We can use a sessionless transport such as BasicHttpBinding, or turn on reliable sessions on NetTcpBinding. The TCP binding provides reliability at the transport level, but you only get a message-level reliable session by enabling it explicitly on the binding. This comes with an overhead, since a lot more messages are exchanged between the client and server to guarantee it, and its chatty nature means it will not give the best performance. <reliableSession enabled="true" />

  • You can strip session support from NetTcpBinding by marking the binding as one-way. This requires creating a custom binding that turns off session support, and it stops the channel close from blocking. In the example below we add the tcpTransport element.

    <bindings>
      <customBinding>
        <binding name="onewayBinding">
          <oneWay />
          <tcpTransport />
        </binding>
      </customBinding>
    </bindings>

Remember to use the same binding at the client also.
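If you prefer code over config, a rough equivalent of this binding (a sketch; the endpoint address is an assumption) can be composed with CustomBinding:

// One-way support layered over TCP, mirroring the config above.
var binding = new CustomBinding(
    new OneWayBindingElement(),
    new TcpTransportBindingElement());

var factory = new ChannelFactory<IOneWayService>(
    binding,
    new EndpointAddress("net.tcp://localhost:8523/oneway")); // address assumed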

By default, all calls from a client are processed synchronously. If more than one call is made from a client, they queue up on the server, and the proxy is also blocked from closing until all the processing completes.

To enable concurrent processing of messages at the service:

The custom binding above also ensures that messages are processed concurrently, which means each request is handled on a different thread. Or you could again use BasicHttpBinding.
However, if you are using NetTcpBinding or another session-shaped binding, you should mark your service’s concurrency mode as Multiple, as shown below.
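Applying that to the hypothetical service implementation sketched earlier:

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OneWayService : IOneWayService
{
    public void Process(int seed)
    {
        // Calls can now be dispatched on separate threads,
        // so any shared state must be protected by the service.
        Thread.Sleep(seed * 1000);
    }
}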

Migration: Legacy application and trigger happy

Many of the major projects of my career have involved application migration: a rewrite using a new technology. I can think of only one instance of a big-bang approach, where we built a new system and replaced the old one in a single go. It wasn’t as simple as it sounds, but we could justify the approach and it worked well.
Every other time, we have built the new system, invariably adding new features, while keeping the old system running and accessible.
There have been many different approaches to this strategy. In some cases we would force users to switch to the new application as features became available, while disabling those features in the old. Other times we would keep the old app running in parallel while making continuous releases of the new application. We have even had to maintain and service the old application to keep the business happy, and sometimes to make it gel with the new features being introduced in the newer application, especially when database changes were involved.

I’ll go through one such case where we had to rewrite a critical application while keeping access to the old system. We discovered that a lot of business functionality lived in database triggers. There were many apps, and I guess at some point someone decided to use triggers so that they needn’t touch any of the apps to add new features. You can read my take on business logic in the database here.

We had decided to use a DDD approach, which meant consolidating all the business rules from the triggers into their respective domains. The database being a layer common to both applications called for some strategy here.

The triggers couldn’t simply be disabled, nor could we let their logic run when the new application interacted with the underlying tables.

We needed to restrict those triggers to the old applications only. The triggers will fire, no stopping that; but we can stop the trigger body from continuing to execute its SQL. There are a couple of ways of doing this.

  1. Using App Name

    This is the least intrusive way of determining whether the trigger needs to continue execution or return. Providing an app name in the connection string, ;app=YourApplicationName, makes it available in your SQL session. I also make it a practice to include this, as it helps in profiling your database more easily; a sketch of setting it from the application side follows after this list. In your SQL (trigger) you can now check:

    if (APP_NAME() = 'YourApplicationName') RETURN;
  2. Using a column to track the transaction source

    Depending on your situation, this may be intrusive or acceptable. If this is a new column, you might want it to have a default value so that the older applications can keep working unchanged. Your new applications should set a specific value for each transaction they perform against these tables. Check for these values in the trigger and exit execution early; the trigger would of course execute fully for transactions from the older applications. With this column you also gain some insight into which app is transacting. Be sure to check the inserted or deleted special table correctly, depending on the operation that activates the trigger.
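As promised in the first approach above, setting the app name is just a connection-string detail on the application side; a minimal sketch (server and database names assumed):

using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older stacks

// Hypothetical connection setup: ApplicationName surfaces as APP_NAME() in the SQL session.
var builder = new SqlConnectionStringBuilder
{
    DataSource = ".",
    InitialCatalog = "LegacyDb",
    IntegratedSecurity = true,
    ApplicationName = "YourApplicationName"
};
var connection = new SqlConnection(builder.ConnectionString);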

With both of the above methods, there is no change to the legacy applications.

BTW, many of the triggers were good candidates for domain events.

I have used both approaches and they have worked well for me. If there are other ways of doing this, let me know in the comments.