Little update about my job after 8 months

Our fiscal year ended at the end of June. After a lot of travel this month I finally had some time to spend with my family. My mom is visiting and was able to watch my daughter Lisa, so my wife could join me in Washington, where I was for Identiverse, and later travel with me to visit friends near New York. June was the heaviest travel month for me so far: I spent 2 nights at home. But this weekend I spent time away from home WITH family and enjoyed a nice time at the water in Bremerton. That also gave me some time to reflect and look back at my new job so far.

Travel

To summarize my job, which I started in October 2018: tons of travel! Before I joined this team I had a year in which I didn't travel at all, and since I started this new role I have been around the world. I have seen many different places and met a ton of new people. I learned a ton of new technology and visited many conferences. Time has really flown by since I started.

The video above is built with the mobile app 'App in the Air'. It reads all my TripIt information (the app I use to organize my travel) and creates a nice little video. As you can see, I have sat on a plane a lot.

Since I started the job I have flown 128,257 actual miles and sat on a plane for 286 hours across 54 flights. If you look at the TripIt stats I have traveled 108 days across 12 trips and visited 16 countries and 33 cities. This resulted in being Delta Diamond for the first time in my life (125,000 qualifying miles needed; I have 141,504 so far this year alone). It also got me to Platinum level at Marriott, and I spent plenty of nights in other hotel brands as well.

To keep my daughter involved during all the time away from home, we bought a world map and set pins on the places I still need to go and where I am at the moment (golden pin). I also send postcards from all the places I travel to (a tip from Colene). So far Lisa has received 15 postcards (the Milan and Johannesburg cards never arrived). Some cards take 5 weeks to arrive, while others take a week.


I started with most of the work for Ignite The Tour, where we had to present on our Identity platform and I had to staff the Azure Active Directory booth. One thing I learned: booth duty is an enormously good way to ramp up. I would recommend that any new hire staff the booth for a couple of days. You might not know any answers when you start, but that forces you to figure out the answers, and it's great for your internal network. It also forced me to understand more than just the developer platform.

Conferences

As my job description says, I presented at a lot of different conferences across the world. Part of the job is trying to get a foot in the door at non-Microsoft conferences. You need to build a bit of a name for yourself before you get selected and invited to conferences. Fortunately I still know some people who were generous enough to offer a speaking slot at their conferences. I also delivered a ton of different developer trainings around the world.

I was fortunate enough to start this job with the help of my colleague Kyle Marsh, so it was an easier start because I was able to ask a ton of questions. Besides Kyle there are a ton of other folks I have gotten to know who can help me with my endless list of questions. We are still figuring things out together.

The interesting part of giving developer training is that you really need to understand and know how things work. I still run into things which I think are not logical or are hard to explain to developers. Most developers we train are not familiar with modern authentication and authorization. Terms like OAuth2 and OIDC are completely new to them. We try to explain the new way to integrate with Azure Active Directory in a way that doesn't require them to understand how those protocols work.

A few conferences stood out to me:

Identiverse

This conference was held in June in Washington DC. Everybody who is anybody in the identity space is at this conference. It felt like a small family. Interesting content, but even more so, very interesting people. You realize these folks are the people who invented a lot of the things that make the internet as we know it more secure. It was also very clear Microsoft is one of the leaders in this space. My colleague Libby demoed our FIDO2 integration with our platform, and that got huge applause from the audience (and the folks in the audience really understand the importance of this).

Techorama Belgium

I finally got the chance to present at and attend Techorama in Belgium (1,700 attendees). Together with my colleague Kyle Marsh I delivered a paid one-day pre-conference developer workshop, and I presented a session at the conference itself. This conference was very well organized and it was great to see a lot of familiar faces and catch up. Fortunately I am presenting at Techorama NL in October as well.

NDC Oslo

This was one of the best organized events I have ever been to. Especially the catering was smart: food was available the entire day, so there were no huge lines during the lunch rush. There were also a ton of familiar faces and tons of very well known speakers. I hope I can get on stage at this conference in the future. I attended a workshop from Brock Allen on ASP.NET middleware and IdentityServer. It was one of the better trainings I have ever attended and it gave me a ton of knowledge about our own platform as well. I returned home with a lot of questions on how and why we implemented certain features in Azure Active Directory :)

People

What I like most about this job is meeting new and familiar people. I love working with (enterprise) developers, and being on the road again helps me meet so many of you. I learn a ton. My job is not only being the developer voice of our Identity organization; it's also bringing back feedback and insights. Every time I talk to a developer I learn something new (or get confirmation of something we already knew).

Coming time

Part of the job is also a ton of customer/ISV meetings and calls to talk through and help with different architectural discussions: how do you do X, how do I add external identities, what's the best way to develop multi-tenant solutions, and so on. We also support our internal teams at Microsoft. It is still cool when you have a call with some developers from Minecraft and you are able to come up with the architecture they need to implement a certain requirement.

In the coming time I am focusing on creating more developer training content. We are scaling up our efforts to also train more field people (MS colleagues who also need to talk about security with our customers) on our developer content. I plan to submit to more conferences to try to get a speaking slot. We will create more developer content in a box which can be used by the field and MVPs to redeliver the training we have been delivering all over the world. Although the content is still changing, we think we are currently in a fairly good spot.

I also want to create a few blog posts with little nuggets of information and things I learned. I hoped to do that more during my learning process, but to be honest, I have been so busy ramping up and delivering content all over the world that I didn't find time to do it.

One thing I didn't expect with all the travel is how tired I would be. When traveling you think: I have so much time on the plane, and when I am at the location I have so much time at night since I am not at home. But most of the time I am just tired, jet-lagged and hungry. There are tons of preparations to do for the trainings and presentations, and the work from Redmond, with all the meetings and customer calls, continues while you are traveling too. So you put in tons of hours and get just a few hours of sleep a night before heading back home, trying to have a social and family life and perhaps spend some time continuing the remodel which is not finished yet :)

We signed up for 20 cities for Ignite The Tour this year (Tokyo and Singapore are new cities for me). We divided them between the two of us. Hopefully we can hire new people to join us for this tour to lessen the burden of travel time a bit. On the other hand, this gives me the opportunity to travel to Australia again, for example, and visit my buddy Roel. There are absolutely benefits to travelling the world.

So far it has been a great experience and I have learned a ton. Sandra and Lisa have been great supporters. Fortunately we can hire 2 more people on the team, which should help cut back some of the travel, which has been a bit crazy.

Configure domain_hint in ASP.NET Core

This took me way too much time to figure out, since there is a ton of old information on the internet. I wanted to change the default behavior when people log in to my ASP.NET Core website using Azure Active Directory (or the Microsoft identity platform). After some searching I figured out how to change this setting.

You have to add the following piece of code to the ConfigureServices method in your Startup.cs.
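Here is a minimal sketch of what that can look like, assuming the stock Azure AD template (Microsoft.AspNetCore.Authentication.AzureAD.UI) is already wired up; 'contoso.com' is a placeholder for your own tenant domain:

// using Microsoft.AspNetCore.Authentication.AzureAD.UI;
// using Microsoft.AspNetCore.Authentication.OpenIdConnect;
// using System.Threading.Tasks;
services.Configure<OpenIdConnectOptions>(AzureADDefaults.OpenIdScheme, options =>
{
    options.Events = new OpenIdConnectEvents
    {
        // Runs just before the user is redirected to Azure AD to sign in
        OnRedirectToIdentityProvider = context =>
        {
            // domain_hint skips the home realm discovery / account picker page
            context.ProtocolMessage.DomainHint = "contoso.com"; // placeholder domain
            // context.ProtocolMessage.LoginHint = "user@contoso.com"; // same idea for login_hint
            return Task.CompletedTask;
        }
    };
});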

The same trick works for login_hint; in the snippet above that's the commented-out LoginHint line.

Hope this saves me some time next time I am looking for this information.

Switching to Google Fi

Last week I switched all of my family's mobile lines to Google Fi. We had T-Mobile for some time, but I wanted to try and see how Google Fi works.

Since I am going to travel a bit for work, I was looking for a new phone which could last at least a working day without charging and gives me great coverage. I also wanted a plan that works great abroad. T-Mobile already has excellent coverage worldwide with free text and data, but the speed is limited; for $5 a day you can get regular speeds (1GB for the day). Google Fi has international data included in their plan, so that sounded interesting. In the past when I checked them out it only worked with a few select phones. Recently they added the possibility to use any Android phone, and even iPhones work today. All it takes is installing the Google Fi app and you're good to go (with your own phone you can test it free for a month, and you can always port your number later if you want to).

What's unique about Google Fi is that it uses 3 operators in the US (Sprint, T-Mobile, U.S. Cellular), picks the strongest one (or one of 2 million+ Wi-Fi hotspots) and switches for you automatically to give you the best connection. You can check out the coverage map here. It can also protect your connection by automatically using a VPN (yes, you have to trust Google, but you already do since you use Android :))

So I ordered a Pixel 2 XL since that gave me a $300 credit on Google Fi. This made it a bit cheaper than the newer Pixel 3 XL. (I love the phone, battery life is excellent and so is the speed; I can't wait to test it internationally.)

The Pixel 2 XL has an eSIM, which means you don't have to put in a separate SIM card (you can if you want). You download the Google Fi app and activate your line through the app. Porting the number was done in less than 2 minutes (you need your number and the PIN code you set up with T-Mobile).

I signed up my wife and daughter too. A few days later the SIM cards arrived in the mail and I popped them into their phones, started the Google Fi app and transferred their numbers. All set and good to go.

On the website or in the app you can see more details about your usage. Everybody can see their own usage in depth, including which app uses how much (that's something I, as the plan owner, cannot see; I can only see the total usage per person).


Adding a data SIM was easy too. You order one for free on the website. You navigate to fi.google.com/data and enter the code on the card which holds the SIM; the Gmail account you are logged in with determines which person the data SIM is attached to. Activating it was easy. I had to add an APN to my extra phone manually (h2g2), but after that it just worked.

So all in all I am quite happy. It's a simple model with voice and text, and the data bundle is easy too. It doesn't matter if you are in the US or abroad, or whether you are tethering or not. It's also easy to get data SIMs if you have devices which need to be online (I can't wait for all my laptops to have a SIM slot).

So how does T-Mobile compare to Google Fi?

My monthly plan with T-Mobile was $126 per month total for 3 lines (I somehow got a free 3rd line in the past). The interesting things it included:

  • Unlimited talk, text and data (2GB-22GB); the hotspot amount is limited
  • Streaming like Netflix doesn’t eat away from your data bundle. But by default it’s optimized for DVD-quality (480p)
  • In-flight texting on all Gogo-enabled flights

There are a bunch of extra things you can add, like voicemail transcription, but you have to pay extra for those. Since I started my service with T-Mobile they have introduced new plans where you only pay $100 including taxes and get a free Netflix subscription as well. So not bad at all.

But when I looked at the usage for my family, I saw we only use around 3GB per month in total. So let's look at what Google Fi charges.

The first line is $20 per month (plus taxes and fees, so add another $5). This gives you unlimited talk and text. You have to pay for data! BUT only for the first 6GB: it's $10 per GB, so you never pay more than $60 for data, and after that it's free. They call it bill protection.

Every extra line costs $15 per month, so for 3 lines I pay $50 plus taxes and fees. The bill protection with 3 lines kicks in at 12GB(!), so a heavy month could cost me more than T-Mobile, but my family's average is only around 3GB, which works out to roughly $50 + 3 × $10 = $80 plus taxes and fees.

T-Mobile charges $20 for an extra line for tablets and $10 for smartwatches. Google Fi doesn't charge anything for a data SIM; you can order as many as you want, and they just eat into the same data from your plan. This I like a lot.

What I also like about Google Fi is that you can use tethering on your phone as well, again using the same data from your plan. The same goes for anything you do internationally. It's all just the same data bundle.

So give it a try. You can use my link https://g.co/fi/r/56XFYR; this gives you $20 credit (and I get some too :))

What to pack for business travel?

For my new job I need to travel a lot again. So instead of giving tips on how to fold your underwear so you can travel for 3 weeks with only a carry-on, I will share some of the stuff I take with me when I travel.


Since I will be delivering presentations and demos and giving training, I travel with at least 2 laptops: in case one stops working, but also to have one ready to download stuff you might need to recover the other device in case you end up with a corrupt OS or something like that.

For this trip to Sydney and Berlin, I will pack 2 Windows machines. I might bring a Mac as the 2nd machine next time instead, but for this trip that won't be needed. So I'll bring my Surface Laptop (all-time favorite) and, as a backup, the Surface Book (1), plus 2 power adapters so I can charge them both at the same time.


I have 2 external drives with presentations, demos, and other stuff I need to use to help prep myself.


It's the Samsung T5 250GB SSD, since they are super fast USB-C SSD drives. I have had these for some time, and the bigger ones are very affordable too. Very useful if you need to copy virtual machines, ISO files, etc.

They also have a copy of Win10 and an offline install of Visual Studio 2017 and VSCode. The offline version of VS2017 is important since a regular install will download tons of stuff (like the Android emulators) from the internet and that’s no fun if you are stuck with crappy hotel Wi-Fi.

I've set up all my accounts with 2-factor auth. If you happen to have no cellular reception or Wi-Fi access for your phone, that might be an issue, so I also set the accounts up to accept the codes the MS Authenticator app generates. An added benefit: you can log in to your sites (like the Azure portal) from your laptop on the plane, where you don't have phone reception. I also bring my Yubikey to be able to access my accounts. I bought a very cheap FIDO2-compliant one, also to be able to demonstrate some of our AAD integration in the future.


To be able to hook up my laptop on stage to a cable which provides internet (always try to get a wired connection; never trust the Wi-Fi at conferences, with tons of people in the room using the precious bandwidth you so desperately need while presenting), I use a USB 3 hub with a 1Gbit Ethernet port. It also comes in handy if you want to plug in the USB receiver for your mouse, your Yubikey and a clicker, for example. I use this one and it works great (not for your Mac though!)

(I have a Satechi, but this seems to be the exact same one)

Whenever you travel and have to present, at a conference or at a customer, you never know if your laptop will successfully connect to whatever AV equipment is set up. Always be on time and try out what works and what doesn't. It has happened dozens of times that I could not connect successfully right away, or only at a very weird resolution. What has helped me is this little adapter.


Even when a Mini DisplayPort was available, it has happened that I still had to use the HDMI port to get the correct connection and, for example, audio to work. This adapter works great on both Surface devices I am bringing. For the Mac, I will carry a USB-C version with VGA, Ethernet, HDMI, and USB. Yes, VGA is still used by a lot of our enterprise customers.

The Logitech presenter has been in my bag for years. Useful to have a remote clicker and on top of that, it has a laser!


When traveling it's always useful to have a battery pack for your mobile. Even cooler and more useful is a battery pack which is also a wireless router or bridge. This is the TripMate Titan; I have the 10400mAh version.


Besides being able to charge your phone, it can also work as a wireless router. Plug in a network cable and you can wirelessly connect your devices. That's useful in your hotel, but also on stage when you don't have good coverage. It works without being powered, but you might want to hook up a USB cable just in case. The device is also capable of creating a wireless connection (to your hotel network) and still acting as a wireless hotspot for your own devices, so they can share the same wireless connection.

In the past, I always threw a US power strip in my suitcase and connected that to the power outlet with a travel adapter. The Mogics Power Bagel is something I haven't used before and am bringing with me for the first time.


It's very small, has its own travel adapter, and you can connect 4 plugs and 2 USB devices at the same time. Since it's round you won't have a problem plugging in larger adapters. It also extends a little extension cord when you use it.

It's always useful to have a spare Ethernet cable handy, for hooking up your laptop in the hotel room or connecting my wireless router to the wall. I bought a set of Cable Matters retractable Ethernet cables since they roll up so nicely.


I also always bring a mouse. It's just easier for me than a trackpad. The Microsoft Arc mouse is a favorite, also because it's flat when you pack it.


If I am planning to rent a car, I always bring a car USB charger to be able to charge my phone, especially when using Waze for navigation.

Of course, USB cables to charge my phone.

Lastly, I have a set of noise-cancelling headphones. I use the Bose QC35 II (no Surface Headphones yet). They are also great for Teams calls when you are on the road since they have a microphone as well. Priceless when you sit on a plane for 20 hours. I also have a pair of in-ear ones which I can use when I want to sleep.


The last thing I pack is my Kindle Paperwhite. Without it it’s really hard to get through all those hours on the plane and nights in the hotel.


So what are your most important travel gadgets? Let me know in the comments.

How to detect if your devices are trying to circumvent your pihole

As I described in my previous blog post, you can set up a Pi-hole DNS server to optimize your network traffic and your browsing experience. But it seems not every device respects your DHCP DNS settings: some devices have hardcoded DNS entries and just ignore them. Scott Helme wrote on his blog how to catch those naughty devices and send their traffic to your Pi-hole instead.

But before doing that, I was curious how many of those devices I actually had on my network. To figure this out I had to set up my USG firewall to catch the TCP/UDP requests on port 53 which do not originate from my Pi-hole (on IP address 192.168.1.10). The USG firewall can be configured to log certain events (without blocking the actions). These show up in the log file on your USG, which can be found at /var/log/messages. You can view this file with the command:

tail -f /var/log/messages

Depending on your firewall configuration you will see almost nothing, or a ton of information scrolling by. The goal is to capture these kinds of events:

Oct 21 17:53:42 USG kernel: [WAN_OUT-2000-A]IN=eth1 OUT=eth0 MAC=80:2a:a8:f0:0a:49:94:9a:a9:23:23:40:08:00 SRC=192.168.1.186 DST=8.8.8.8 LEN=58 TOS=0x00 PREC=0x00 TTL=127 ID=59302 PROTO=UDP SPT=58633 DPT=53 LEN=38

What you see here is a request from IP address 192.168.1.186 doing a DNS lookup (DPT=53 means destination port 53, which is the port a DNS server listens on) against the DNS server at IP address 8.8.8.8.

A legitimate event would look like this:

Oct 21 17:55:05 USG kernel: [WAN_OUT-2000-A]IN=eth1 OUT=eth0 MAC=80:2a:a8:f0:0a:49:b4:fb:e4:8c:32:67:08:00 SRC=192.168.1.10 DST=1.1.1.1 LEN=57 TOS=0x00 PREC=0x00 TTL=63 ID=20414 DF PROTO=UDP SPT=23724 DPT=53 LEN=37

This is a DNS request coming from my Pi-hole server on 192.168.1.10, which is configured to forward DNS requests to 1.1.1.1.

Let's set up the firewall to start generating these logs in your log file. I have done this with UniFi version 5.9.29. Go to your Cloud Key settings page, click Routing & Firewall, click Firewall at the top of your screen, click WAN OUT and click Create New Rule. This is what my screen looks like:

At the bottom you have to create a new port group for the destination. Click the Create Port Group button and create one for DNS like I did below:

Make sure you click the Add button after you have filled in the port number (DNS listens on port 53) before you hit Save. Click Save again; this will cause your USG to be provisioned. Then SSH into your USG.
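For reference, the rule the controller ends up provisioning on the USG is roughly the following (in EdgeOS terms, matching the [WAN_OUT-2000-A] prefix you see in the log lines above). You would not normally enter this by hand, since a re-provision overwrites manual changes:

set firewall name WAN_OUT rule 2000 action accept
set firewall name WAN_OUT rule 2000 log enable
set firewall name WAN_OUT rule 2000 protocol tcp_udp
set firewall name WAN_OUT rule 2000 destination port 53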

To see all DNS requests in your USG log file you can use the following command:

tail -f /var/log/messages | grep -F "DPT=53 "

This will show any DNS requests going out to the internet, including the ones from your Pi-hole. To see only the naughty devices you can use the following command (another grep; perhaps there is a more efficient way, but this worked for me :)), where the IP address is the IP address of your Pi-hole:

tail -f /var/log/messages | grep -F "DPT=53 " | grep -v "SRC=192.168.1.10"

This one takes a while before it starts showing output, but it worked for me. Now you will only see the DNS requests from your naughty devices going through your USG. So how do you test this? The following command performs a DNS lookup and lets you specify the DNS server the request is sent to, which is a great way to test your setup:

nslookup techmeme.com 8.8.4.4

So far I have only seen a Samsung Galaxy S7 going to a Google DNS server directly. So the devices on my network seem to be well behaved.

Installing pihole on your Cloudkey gen2+

The other day I bought myself a Gen2 Cloud Key Plus from Ubiquiti to replace my old Cloud Key. It comes with the UniFi SDN controller and the new UniFi Protect installed. The device looks really nice and has a little display which shows you information about the applications running on the device.


Since I have been playing with Pi-hole lately on one of my Raspberry Pis, I was wondering if I could install Pi-hole on the Cloud Key so I would have everything for my network in a central place. With the help of Google I managed to get it working by following the steps below:

First you have to install a DNS server on the Cloud Key, since that's used by the Pi-hole software. SSH into your Cloud Key and enter the following commands:

sudo -i

apt-get update

apt-get install dnsmasq

Then we can install the Pi-hole software. I chose to download the install script and execute it on my device:

cd /tmp

wget -O basic-install.sh https://install.pi-hole.net
bash basic-install.sh

Keep all the defaults. The only thing I had to do was say no to keeping the IP address from DHCP, since it didn't copy the IP address correctly; I entered it myself. During the install the lighttpd web server is installed too; this is used by the admin page.

The last thing to do is change the default port of the admin website, since port 80 is already taken by the Cloud Key management interface (lighttpd, which serves the Pi-hole admin page, was installed during the Pi-hole install).

make a backup of the config:

cp /etc/lighttpd/lighttpd.conf /etc/lighttpd/lighttpd.conf.backup

sed -i -e 's/= 80/= 81/g' /etc/lighttpd/lighttpd.conf

or use vi/nano to edit the config file and change the server port

restart the webserver

/etc/init.d/lighttpd restart

http://<IP>:81/admin should bring up the pi-hole interface

Every time you rerun the Pi-hole installer you have to change the web server port away from port 80 again.

Let me know if this works for you or if I forgot to document a step.

New job in the Azure Identity team

I just posted the email to my colleagues and sent a mail to our wonderful Windows Development MVPs. Today is my last day in Windows (DEP, the developer platform team). I am starting a new job in the Azure Identity organization, in the CxP team. I will be working with developers to evangelize and drive adoption of our Azure Active Directory platform. The full job description is below:

Senior Program Manager

Azure Active Directory Premium, B2C

The Digital Transformation era is upon us! Applications and data are moving to the cloud; employees want to be productive on devices they love from locations of their choice; organizations want to give seamless access to employees and partners; self-service is in and helpdesks are past. In the middle of all these exciting changes, security breaches are getting more sophisticated by the day. The single common factor in this journey that our customers are undertaking is … Identity.

The @Scale CXP team in the Identity engineering division within Cloud+AI works with partners, developers and customers from all over the world to drive service adoption and we work directly with engineering to shape the product. The best of both worlds!

As Microsoft cloud services adoption continues its rapid growth, developers play a critical role in helping to drive usage of our services. Developers are at the center of enabling key customer scenarios, building solutions ranging from enterprise-scale applications and services to niche departmental business process apps. Assuring developers have the technical skills and the Identity developer platform necessary to build and sustain a vibrant Identity business is extremely important to our shared success. Assuring our developers' needs are evangelized throughout our engineering organization as part of the engineering lifecycle is critical to our long-term business growth and sustainability.

Responsibilities

In this role you will help drive usage and adoption of the Identity dev platform by supporting awareness and growth of product expertise within the developer ecosystem, and define, build, and execute on engagements with developers to get feedback, evangelize their product needs, and drive enhancements through the engineering lifecycle. This work is instrumental for our business to learn from developers across the globe as we understand how our technology is adopted. Our world evolves at the speed of cloud and we are looking for active learners who can collaborate across a diverse team and global business.

Key Responsibilities:

Evangelize the Identity developer platform and drive its adoption

Drive usage: More active third-party apps built on the Microsoft Identity developer platform getting used more broadly across a larger customer base.

Drive engagement model with B2C developers to grow the inventory of apps in our marketplace, remove technical roadblocks and discuss product roadmaps. Connect with developers at major Microsoft or Industry events and road shows.

Own Technical Enablement and Readiness: Drive Identity dev platform awareness through calls, webinars, office hours, Yammer, training sessions, etc.

Define performance measures to provide our Identity leadership with crisper, actionable insights.

Channel Developer feedback to the feature teams to help with prioritization.

Track and improve Developer satisfaction with our platform.

Partner with other Microsoft teams to align with their developer ecosystem strategy.

Regularly report out on impact and opportunities.

Qualifications

Basic Qualifications:

Minimum seven years of work experience in the computer software industry including two years of technical experience in security, cloud, and/or identity solutions.

Bachelor’s Degree in computer science or related discipline, or equivalent experience.

Preferred Qualifications:

Ability to Ramp to L400+ on Identity Platform Technology

Direct experience working with developers is highly desired

Collaboration and cross-teaming skills.

Comfortable working autonomously in a fast-paced environment where new challenges exist around every corner.

Prioritization, time management and organizational skills.

Ability to take on complex systems and processes and drive simplification and improvements.

Self-starter, who can deal with ambiguity, maintains focus, drives to clarity and provides innovative solutions.

 

I had an amazing time in Windows, and spent the last year working for one of the best managers I have had in my career (thank you Lora!). I am going to miss working with the fantastic Windows developer community and I hope our paths cross again. I will take some time off before I start the new role. There are lots of new things to learn, and I can finally talk and blog about my work again, so I expect to take you along, on my blog, during the Azure Identity journey I am about to make.

Adding FlightRadar24 feed to my FlightAware raspberry pi PiAware install

For a week or so now I have been running PiAware from FlightAware on one of my Raspberry Pis, and it's running fine. Thanks to Chris Johnson I also managed to feed Flightradar24 from the same receiver. These are the steps I did on my Raspberry Pi through the shell. I don't run a fancy container solution like Chris does on his setup, so I had to steal some configuration and instructions from his GitHub page.

These were the instructions I pasted in my sudo shell window:
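Roughly, this boils down to the official Flightradar24 Raspberry Pi install script (the URL below is the standard Flightradar24 one; double-check it against Chris's instructions before running it):

sudo bash -c "$(wget -O - https://repo-feed.flightradar24.com/install_fr24_rpi.sh)"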

To configure the feed, type:
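This is presumably the standard fr24feed signup wizard:

fr24feed --signup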

Enter your email address, leave the next field blank, enter your latitude, enter your longitude, enter your altitude in feet, and enter 'yes' to confirm; the ini file will then be filled in for you.

Finally:
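Presumably a restart of the feeder service so it picks up the new settings, something like:

sudo systemctl restart fr24feed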

and you are set. You can check the /var/log/fr24feed.log file to see if everything is working correctly.

Creating my config.gateway.json provisioning file for my USG

As described in a few previous blog posts, I needed to set some configuration through the command line on my USG. But every time you provision the USG those changes are lost. This can be solved by storing the changes in the config.gateway.json file on my Cloud Key. Since the Cloud Key is running Ubuntu, I can find that file in /usr/lib/unifi/data/sites/default (your site can be named differently, but mine is the default).

This is my current configuration; it contains both the IPv6 configuration for Comcast and my VPN routing information. Lines 74-89 and 135-173 are the lines specific to my source-address-based routing setup.

What I did to create this file was log in to my USG, go into configure mode and set one configuration item. I entered the command mca-ctrl -t dump-cfg to see what the config looked like and copied the correct node into the file. After saving I did a forced provisioning of the USG from the UI and checked whether it worked (show configuration).
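As an illustration only (not my full file): the JSON simply mirrors the hierarchy of the CLI set commands. For example, the source-validation setting from the source address based routing post below would look like this inside config.gateway.json:

{
  "firewall": {
    "source-validation": "disable"
  }
}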

Configuring source address based routing on my Unifi USG

Updated 10/24/2018 since the routing didn't work anymore: you have to disable source-validation. Thanks to Roelf for the comment with the correct command.

For some time now I have wanted to be able to test some network stuff. I want to be able to connect certain devices over a VPN to the Netherlands, but without the need to configure every client with a VPN connection.

With this scenario it is possible to test different geo stuff by accessing my network from different places in the world, and it also helps me test the different latencies when going across the ocean and back. It could also be used to access certain video services in another country or a different Netflix catalog, but I would never use it for something like that, obviously :)

After reading up on the different forums and asking some questions, I was able to configure my USG in a way which gives me the most flexibility possible for my scenario. This is the step-by-step guide on how to configure your USG and network so all traffic on that special network will be routed over the VPN connection to the Netherlands.

The first step is to configure my 'hoekstraonline NL' network as described in this blogpost. Connecting through my 'hoekstraonline NL' wireless network and specific ports on my router (tagged with the same VLAN 100) will be the basis of my configuration: I want all of that traffic going over the VPN connection to NL. All my regular traffic will go over my Comcast connection as usual, but machines connected to that wireless network and to specific ports on my routers will be routed over the VPN connection.

So let's create the VPN client network first. Nowadays this can be done through the UI (I am running UniFi version 5.6.20 stable candidate as I write this).

(screenshot: the VPN client network settings in the UniFi UI)

After you create this network you can check on your USG what the routing table looks like; it should have added the VPN NL network. Enter the following command on your USG (via SSH):

ubnt@USG:~$ netstat -r

My routing table looks like this:

(screenshot: netstat -r output on the USG)

The pptpc0 interface is the VPN connection I just defined; you can see from the flags that the connection is up (U). The eth1.100 interface is the virtual network which was added in the previous blogpost.

The next step is to change the routing depending on the source address. Unfortunately this can't be done through the UniFi GUI. They add more and more functionality every month, but for now this has to be done through the command line, so fire up your bash shell or PuTTY and connect to your firewall (the USG in my case).

In the shell, type configure:

ubnt@USG:~$ configure
[edit]

We have to define a new routing table, which we call table 1, that will route traffic to my VPN connection on the 10.0.0.0/24 network.

ubnt@USG# set protocols static table 1 route 0.0.0.0/0 next-hop 10.0.0.1
[edit]

Now we have to define the modify policy. A modify policy allows us to modify various items when the rule matches. So if the source address is in the 192.168.2.0/24 network, we want to use routing table 1:

ubnt@USG# set firewall modify SOURCE_ROUTE rule 10 description 'traffic from eth1.100 to VPN NL'
ubnt@USG# set firewall modify SOURCE_ROUTE rule 10 source address 192.168.2.0/24
ubnt@USG# set firewall modify SOURCE_ROUTE rule 10 modify table 1

Now we need to apply this policy to the interface. When it comes to applying a policy to an interface, it needs to be done on the input interface before the routing lookup takes place.

ubnt@USG# set interfaces ethernet eth1 vif 100 firewall in modify SOURCE_ROUTE

A last step which you need (this changed, so this step was added 10/24/2018) is to disable source validation (thanks to Roelf for the comment and help):

ubnt@USG# set firewall source-validation disable
[edit]

After this you can give the commit and save commands and test your network routing. From a client in the 192.168.1.x range nothing should be different, but when you test from a 192.168.2.x client you will see the traceroute change to the 10.0.0.1 hop and then off to the Netherlands.
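For completeness, applying and persisting the change from the same configure session uses the standard EdgeOS commands:

ubnt@USG# commit
ubnt@USG# save
ubnt@USG# exit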

The first tracert is from a machine in the 192.168.1.x range. You see the first hop is my USG gateway and then it goes out to the internet.

The second tracert is from the same machine when it's in the 192.168.2.x range. You see the second hop goes through the 10.0.0.0 VPN gateway, and you also see the response times go up since the traffic is now crossing the ocean.


Mission accomplished!

The last step is to add these settings to the provisioning script stored on my cloudkey, so when I reset the USG the settings won’t be lost.

One of the sources I used to write this article.