How to run Juniper Firefly (vSRX) on KVM -- SRX in a box setup

Juniper has released a virtual form-factor SRX called Firefly Perimeter (vSRX). It provides the security and networking features of the SRX Series gateways in a virtual machine format and can be spawned as a VM on a KVM/QEMU or VMware hypervisor running on an x86 server.


This post details how to set it up as a standalone SRX box that can be used in any of your network deployments just like a physical SRX.

Pre-requisites

  1. An x86 server with at least 4 GB of RAM, 4 GB of hard disk space, and two Ethernet ports. The CPU must support hardware virtualization (a quick check follows this list).
  2. Ubuntu 14.04 installed on it (CentOS should also work, provided the KVM-related configuration differences are taken care of).
  3. Assumption: you are logged into the system as the root user.
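
Before going further, it is worth confirming that the CPU exposes hardware virtualization extensions (VT-x/AMD-V), since KVM depends on them. One quick way to check, where a non-zero count means the extensions are present:

egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means VT-x/AMD-V is available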

Get the Software

Firefly Perimeter can be downloaded as part of Juniper's software evaluation program and tried out for 60 days. You will need a Juniper account to download it here. For this post, I will be using the "Firefly KVM Appliance - FOR EVALUATION" image.

Configure the Server

Firefly needs the following software to be installed in order to work properly:
  • qemu-kvm
  • libvirt
  • Open vSwitch
  • Virtual Machine Manager (virt-manager)
  • bridge-utils
You can install all of the above by running the command:
 
apt-get install qemu-kvm libvirt-bin bridge-utils \
                virt-manager openvswitch-switch
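
Once the packages are installed, a quick sanity check confirms that the KVM modules are loaded and that the libvirt and Open vSwitch daemons are up:

lsmod | grep kvm     # expect kvm_intel or kvm_amd in the output
virsh version        # libvirt should respond without errors
ovs-vsctl show       # prints the OVS database contents if the daemon is running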

Firefly Perimeter requires a storage pool configured on the KVM host and the virtual networks defined before it can be spawned.

Creating a Storage Pool on KVM

I am using a directory-based storage pool for this example. If you want to try out the other pool types, you can check them out here.


mkdir /guest_images
chown root:root /guest_images
chmod 700 /guest_images
virsh pool-define-as guest_images dir - - - - "/guest_images"
virsh pool-build guest_images
virsh pool-autostart guest_images
virsh pool-start guest_images
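
To confirm the pool came up correctly, list it and check its details; it should show as active and autostarted:

virsh pool-list --all
virsh pool-info guest_images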

Creating the virtual Networks

As shown in the figure for this deployment, I will be creating two virtual networks and assigning them to Firefly: a management network backed by a plain Linux bridge and a data network backed by an Open vSwitch bridge. For this, we will create two XML files with the corresponding network descriptions and then execute virsh commands to define the networks.


dut.xml
<network>
  <name>data</name>
  <bridge name="br_data" />
  <forward mode="bridge" />
  <virtualport type='openvswitch'/>
</network>

mgmt.xml
<network>
  <name>mgmt</name>
  <bridge name="br_mgmt" />
  <forward mode="bridge" />
</network>

After creating the XML files, execute the following commands:
bash# virsh
virsh# net-define mgmt.xml
virsh# net-autostart mgmt
virsh# net-start mgmt

virsh# net-define dut.xml
virsh# net-autostart dut
virsh# net-start dut
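
Both networks should now show as active and set to autostart; you can also dump the data network's XML to confirm the Open vSwitch virtualport was picked up:

virsh# net-list --all
virsh# net-dumpxml data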

Create the bridges

We need to create two bridges, br_mgmt and br_data, and add eth0 and eth1 to them respectively, as shown in the figure above.

br_mgmt (Linux bridge)
brctl addbr br_mgmt
brctl addif br_mgmt eth0

br_data (OVS bridge)
ovs-vsctl add-br br_data
ovs-vsctl add-port br_data eth1
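
A quick check that both bridges exist and have picked up their ports:

brctl show br_mgmt   # eth0 should be listed under br_mgmt
ovs-vsctl show       # br_data should list eth1 as a port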

Now we need to move the host's IP address from eth0 to br_mgmt:

vi /etc/network/interfaces
auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto br_mgmt
iface br_mgmt inet static
address xx.xx.xx.xx
netmask 255.255.xxx.0
gateway xx.xx.xx.xx
dns-nameservers xx.xx.xx.xx
#pre-up ip link set eth0 down
pre-up brctl addbr br_mgmt
pre-up brctl addif br_mgmt eth0
post-down ip link set eth0 down
post-down brctl delif br_mgmt eth0
post-down brctl delbr br_mgmt
Restart the networking service by running /etc/init.d/networking restart.
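
After the restart, the host's IP should sit on br_mgmt and eth0 should carry no address; verifying connectivity at this point saves you from debugging the VM later for what is really a host-side problem:

ip addr show br_mgmt    # should carry the host IP
ip addr show eth0       # should have no IP address
ping -c 3 xx.xx.xx.xx   # substitute your gateway IP to confirm reachability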

Spawn the VM

Once the storage pool and the necessary virtual networks are ready, we can spawn the Firefly VM on the hypervisor using the following commands:

bash -x junos-vsrx-12.1X47-D10.4-domestic.jva MySRX -i 2::mgmt,data -s guest_images
virsh# start MySRX
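
If the install script succeeded, the VM should appear in the domain list, and you can attach to its serial console to watch Junos boot:

virsh# list --all        # MySRX should show as running
virsh# console MySRX     # attach to the serial console; exit with Ctrl+]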

You can also use Virtual Machine Manager (virt-manager) to start the VM.


In the next post, I will build on this and give details on the initial SRX setup and testing.
