I am a dedicated engineer with experience in security, DevOps, software development, systems administration, sales and leadership.
My background is in support and maintenance of production environments for high-profile companies and government agencies. More recently, I have been leading the development of highly available, continuously deployed security software running on AWS, and I now lead the Security Engineering team at Xero. As a software development lead, I also support graduates and interns.
I hold current AWS Solutions Architect, AWS SysOps Administrator and AWS Security Specialty certifications, as well as hands-on knowledge of a range of computer technologies. These include:
I have written container pipelines for multiple companies, from large enterprises to small startups. I am well versed in AWS Elastic Container Service, CodePipeline, CloudWatch, Lambda and the associated IAM access-control model.
Continuous Delivery and Continuous Integration pipelines are a particular area of specialty. I have written fully automated workflows for network, cluster and service-level infrastructure, as well as automated deployments of the containers themselves, with change triggers from their source repositories.
I have deployed secure pipelines within closed networks, written custom action types and integrated services into CodePipeline that are not available off-the-shelf from Amazon.
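As an illustration of the custom action work, registering a custom action type with CodePipeline takes a definition along these lines (the provider name, configuration property and URL here are illustrative, not from a real pipeline):

```json
{
  "category": "Deploy",
  "provider": "InternalDeployer",
  "version": "1",
  "settings": {
    "entityUrlTemplate": "https://deploy.internal.example.com/{Config:ProjectName}"
  },
  "configurationProperties": [
    {
      "name": "ProjectName",
      "required": true,
      "key": true,
      "secret": false,
      "queryable": false,
      "type": "String",
      "description": "Internal project to deploy"
    }
  ],
  "inputArtifactDetails": { "minimumCount": 1, "maximumCount": 1 },
  "outputArtifactDetails": { "minimumCount": 0, "maximumCount": 0 }
}
```

A worker inside the closed network then polls CodePipeline for jobs matching this action type and reports success or failure back, which is what makes off-pipeline integrations possible.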
PACMAN is a self-service and automation tool designed to make managing thousands of users' access to platform systems much easier.
Developed during my time as a Security Engineer at Xero, PACMAN integrates with various systems to understand and verify users and their current state on the platform. PACMAN uses an automated workflow system written in Node.js to interact with Zendesk, recording user actions and requests just as a human agent would. This provides a clear and concise audit trail.
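As a sketch of the Zendesk interaction, the workflow builds ticket updates shaped like the standard Zendesk Tickets API; the helper name and comment format below are illustrative, not PACMAN's real code:

```javascript
// Illustrative sketch: how an automated workflow might record an action
// against a Zendesk ticket, mirroring what a human agent would write.
// The PUT /api/v2/tickets/{id}.json shape follows Zendesk's Tickets API;
// the helper name and fields are hypothetical.
function buildAuditComment(ticketId, actor, action) {
  return {
    method: 'PUT',
    path: `/api/v2/tickets/${ticketId}.json`,
    body: {
      ticket: {
        comment: {
          body: `[PACMAN] ${actor}: ${action}`,
          public: false, // internal note, visible to agents only
        },
      },
    },
  };
}
```

Recording every automated step as an internal ticket comment is what gives the system an audit trail indistinguishable from a human agent's.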
The frontend application is written in React, using libraries such as Redux. The backend runs entirely in containers, with a collection of Python and Node.js containers providing the various services. The system runs on AWS ECS and has a fully automated build and deployment pipeline. Real-time monitoring is configured in AWS CloudWatch and Sumo Logic.
Nagios is great for monitoring large amounts of dispersed infrastructure; what it lacks is a modern web frontend. To allow for faster lookups of time-based information, I integrated data from the Nagios JSON API into a custom frontend and node.js backend, synchronising with the Nagios database and allowing in-depth drill-down on status over time.
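To give a flavour of the transformation the backend performed, here is a minimal sketch that collapses a Nagios `statusjson.cgi` servicelist payload into per-state counts for the frontend. The payload shape and numeric status codes follow Nagios Core's JSON CGI conventions (2 = OK, 4 = WARNING, 8 = UNKNOWN, 16 = CRITICAL); the function name is illustrative:

```javascript
// Map Nagios Core statusjson.cgi service status codes to readable states.
const STATES = { 2: 'ok', 4: 'warning', 8: 'unknown', 16: 'critical' };

// Collapse a servicelist payload ({ data: { servicelist: { host: { service: code } } } })
// into counts per state, suitable for a dashboard summary widget.
function summarizeServices(payload) {
  const counts = { ok: 0, warning: 0, unknown: 0, critical: 0 };
  const hosts = payload.data.servicelist;
  for (const host of Object.keys(hosts)) {
    for (const service of Object.keys(hosts[host])) {
      const state = STATES[hosts[host][service]];
      if (state) counts[state] += 1;
    }
  }
  return counts;
}
```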
Due to an office merger and relocation, I was tasked with migrating our office's infrastructure into the new premises. I prepared by auditing the network and gathering information from colleagues. Re-architecting the network was necessary to merge it with that of another office, which had a completely different topology, subnets and domains. I designed and documented a new network architecture. Once moved, I ensured key services and backbone infrastructure such as NAS, networking, VPN and VM pools were working as intended, with as little downtime as possible.
The end of support for Windows Server 2003 in July 2015 meant we needed to migrate our older Windows Server VMs to Server 2012. This included domain controllers and file servers for home drives and group shares. I implemented the multi-terabyte migration in a way that maintained high availability: snapshots were completed in the lead-up to the migration so that only a small amount of changed data had to be copied during the outage, meaning we were back online within the hour. The process included analysis of file permissions to ensure a complete copy was made.
Built to replace an ageing PHP-based calendar system housing tens of thousands of appointments across twenty resources. The system consists of a web app, served by a node.js server-side application, which pulls data from a CalDAV server. The application makes use of HTML5 WebSockets, allowing for real-time information on room availability.
I leveraged the charting library d3.js to provide business managers with visualisation of room usage, drawn directly from the CalDAV server's booking information.
The browser frontend is written to be as clear and user-friendly as possible. I spent significant time with key stakeholders to build dynamic booking forms that guide the user while collecting the information needed by resource support teams.
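At the heart of the real-time availability display is a simple interval-overlap check. A minimal sketch, with hypothetical names (not the app's real API) and millisecond timestamps:

```javascript
// A requested slot is free if it overlaps no existing booking.
// Two intervals overlap iff each one starts before the other ends,
// so the slot is available when that is false for every booking.
function isAvailable(bookings, start, end) {
  return bookings.every(b => end <= b.start || start >= b.end);
}
```

Back-to-back bookings (one ending exactly when the next starts) are deliberately treated as non-overlapping here.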
When I began my employment at the University of Otago, all updates on OS X clients were being deployed manually. This was very time-consuming, as it meant logging into each of up to a hundred machines. I implemented an automated update deployment system that downloads, packages and tags each update for testing. It then takes one click to move the update to production and have the client machines install the patch automatically.
Additionally, the system allows end users to install pre-approved optional applications on demand, reducing help-desk calls for common software installs. The system as a whole receives praise for its reliability and usability, and is still in use today.
To improve efficiency and reduce human error when deploying machines, I implemented a heavily automated build process. Leveraging slipstreaming and MDM configuration profiles, I was able to cut up to three hours from the process of getting a machine out of the box and onto the end user's desk.
To complement this I documented a quality control procedure and implemented a data collection process which halved the amount of paperwork required, whilst still capturing the same amount of information.
Jerry was a semi-conversational chatbot built to assist with queries to the systems engineering team. He lived in our office chatroom, told jokes, inspired the team with motivational quotes and spammed us with cat pictures. He was also programmed to report information on room bookings, software versions in deployment, machine group info, network speed and server uptime. Jerry was written in node.js and could connect to Slack and XMPP, as well as chat through a retro-themed web interface.
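The core of a bot like Jerry is a platform-agnostic command dispatcher, so the same logic can serve Slack, XMPP and a web UI. A toy sketch (command names and replies are illustrative):

```javascript
// Map the first word of a chat message to a handler function.
const commands = {
  joke:   () => 'Why do programmers prefer dark mode? The light attracts bugs.',
  uptime: () => `up ${Math.floor(process.uptime())}s`,
};

// Dispatch a message: run the matching handler, or fall back politely.
function handleMessage(text) {
  const word = text.trim().toLowerCase().split(/\s+/)[0];
  const handler = commands[word];
  return handler ? handler() : "Sorry, I don't know that one.";
}
```

Because handlers are plain functions returning strings, each chat connector (Slack, XMPP, web) only has to forward messages in and replies out.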
Rupert is an excellent and competent professional to deal with. It is easy to see the energy and drive behind his passion for systems and computers. He offers great service, prompt contact and invaluable knowledge and support. I would recommend him to anyone wanting quality assistance. - Lapo Ancillotti - Project Manager, Designer, R&D Coordinator
I had the good fortune to work with Rupert. He always demonstrated genuine thought leadership and a talent for managing teams and relationships. He also has a profound understanding of business development, business issues and the bridge with technology to develop business solutions. - Israel Reyes - Founder & Managing Director, Solity Software Ltd.
Rupert is an excellent worker and an awesome 2IC - I report directly to him on Saturdays. He is very knowledgeable and efficient with his work and a great person to work with. - James O'Neill - Previously Sales Consultant, Yoobee Wellington
Rupert was a very knowledgeable Guru who was always helping to improve the Yoobee business and drive sales. His ability at building rapport and helping customers with their software and hardware needs was fantastic and he was always willing to step outside of his job description and help the team whenever needed. His attitude and drive to succeed made him effortless to manage and a great asset to the team. - Alicia Rameka - Previously Store Manager, Yoobee Wellington
In my spare time I enjoy writing music, learning new skills, and playing games like Civilization, 7 Days to Die, Company of Heroes and PUBG.
I play drums, piano and produce electronic music. I have entered my music into the 48 Hours Film Festival a number of times, working with a talented team of film makers.
Recently I have been learning skills in game & 3D design, using programs like World Machine & Terragen to generate photo-realistic scenery and heightmaps.