Learning about LLVM

In a previous post, I talked about TCO (Tail Call Optimization) and was not sure how Rust handled it. Someone on Reddit responded and noted that Rust itself does not do anything with this, but LLVM does. That made me want to learn more about LLVM. The code related to this post can be found here.

Overview of LLVM project

So what is LLVM? Here is a quote from the project's website.

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Despite its name, LLVM has little to do with traditional virtual machines. The name “LLVM” itself is not an acronym; it is the full name of the project.

Source: https://llvm.org/

My short description of the project, based on what I have learned so far, is this: previously, most compilers reinvented the wheel for a lot of their functionality, and many projects had very similar but separate tooling. The idea of this project was to abstract out the various components of the toolchain. This allows the writers of a new language or compiler to focus on their language and not on optimization and architecture-specific logic.

There is a nice diagram of this design in “Intro to LLVM” by Chris Lattner, linked from the documentation page of the LLVM project.

These were some of the motivations behind Java when it was created, but LLVM is not a VM in the same way the JVM is. LLVM bitcode can be run with a JIT/interpreter, but it can also be compiled into a platform-dependent executable, so LLVM code is not build-once-run-anywhere.

So, in writing a compiler for a new language, you can focus on parsing and translating your new language to LLVM IR, and then use the existing tools in the LLVM toolchain to provide optimizations and target a backend.

Some of the key terms that I will be using here are the following:

LLVM IR – LLVM Intermediate Representation – a human-readable text language that is in some ways similar to a typical architecture's assembly language.

LLVM BC – LLVM Bitcode – the binary encoding of the above.

Commands That I Use Below

llvm-as – LLVM assembler

This program assembles .ll text files to .bc binary files.

It reads a file containing human-readable LLVM assembly language, translates it to LLVM bitcode, and writes the result into a file or to standard output.

Source: https://llvm.org/docs/CommandGuide/llvm-as.html

llvm-dis – LLVM disassembler

This program disassembles .bc binary files to .ll text files.

The llvm-dis command is the LLVM disassembler. It takes an LLVM bitcode file and converts it into human-readable LLVM assembly language.

Source: https://llvm.org/docs/CommandGuide/llvm-dis.html

llc – LLVM static compiler

This program compiles either .bc or .ll files into architecture-specific assembly language .s text files.

The llc command compiles LLVM source inputs into assembly language for a specified architecture. The assembly language output can then be passed through a native assembler and linker to generate a native executable.

Source: https://llvm.org/docs/CommandGuide/llc.html

lli – directly execute programs from LLVM bitcode

This runs either .bc or .ll files directly without statically compiling them.

It takes a program in LLVM bitcode format and executes it using a just-in-time compiler or an interpreter.

Source: https://llvm.org/docs/CommandGuide/lli.html

Example with C

Source for test

I decided to start off with a very trivial example with C to see what I would be seeing. So I started with this program:

#include <stdio.h>

int main() {
	printf("Hello World!\n");
	return 0;
}

Generate the IR code

I compiled this to emit the LLVM IR representation:

clang -S -emit-llvm main.c

The result is this fairly concise blob of code, which makes sense because C has always been a small abstraction above assembly.

; ModuleID = 'main.c'
source_filename = "main.c"
target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-apple-macosx10.14.0"

@.str = private unnamed_addr constant [14 x i8] c"Hello World!\0A\00", align 1

; Function Attrs: noinline nounwind optnone ssp uwtable
define i32 @main() #0 {
  %1 = alloca i32, align 4
  store i32 0, i32* %1, align 4
  %2 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i32 0, i32 0))
  ret i32 0
}

declare i32 @printf(i8*, ...) #1

attributes #0 = { noinline nounwind optnone ssp uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="penryn" "target-features"="+cx16,+fxsr,+mmx,+sahf,+sse,+sse2,+sse3,+sse4.1,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="penryn" "target-features"="+cx16,+fxsr,+mmx,+sahf,+sse,+sse2,+sse3,+sse4.1,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" }

!llvm.module.flags = !{!0, !1}
!llvm.ident = !{!2}

!0 = !{i32 1, !"wchar_size", i32 4}
!1 = !{i32 7, !"PIC Level", i32 2}
!2 = !{!"Apple LLVM version 10.0.0 (clang-1000.11.45.5)"}

Run the code

I can then run this code directly using the following:

lli main.ll 
Hello World!

You can transform the code into the binary bitcode format using the following:

llvm-as main.ll 

Generate Assembly Code

If you want to see the actual platform-specific assembly code, you can run the following. You can see the results from running this on my Mac here.

llc main.ll

Example with Rust

Since I have been learning Rust recently, I decided to have a similar test in Rust.

Source for test

$cargo new rust-llvm
     Created binary (application) `rust-llvm` package
fn main() {
    println!("Hello, world!");
}

Generate the IR code

rustc --emit llvm-ir src/main.rs

You can see the IR code here.

Run the code

Running this gives a link error, and I was unable to find a quick solution to pull in the Rust runtime dependencies, so I did not pursue it very far. I did find this and this, which may lead somewhere, but I did not spend a huge amount of time on it.

lli main.ll 
LLVM ERROR: Program used external function '__ZN3std2io5stdio6_print17hf28330b8a1759fecE' which could not be resolved!

Generate Assembly Code

You can see the results from running this on my Mac here.

llc main.ll

Example with Swift

One of the other big projects that uses LLVM is Swift. Since I am writing this post on my Mac, I decided to do the same quick test with Swift.

Source for test

$swift package init --type executable
Creating executable package: swift-llvm
Creating Package.swift
Creating README.md
Creating .gitignore
Creating Sources/
Creating Sources/swift-llvm/main.swift
Creating Tests/
Creating Tests/LinuxMain.swift
Creating Tests/swift-llvmTests/
Creating Tests/swift-llvmTests/swift_llvmTests.swift
Creating Tests/swift-llvmTests/XCTestManifests.swift
print("Hello, world!")

Generate the IR code

swiftc -emit-ir main.swift

You can see the IR code here.

Run the code

lli main.ll 
LLVM ERROR: Program used external function '_swift_bridgeObjectRelease' which could not be resolved!

I got an error; it is a link error for a missing external dependency. After loading a couple of additional libraries, I get the expected output.

lli \
    -load=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/macosx/libswiftCore.dylib \
    -load=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/macosx/libswiftSwiftOnoneSupport.dylib \
    main.ll

Hello, world!

Generate Assembly Code

You can see the results from running this on my Mac here.

llc main.ll

Example with C++

Just to round out this post, I decided to do a C++ example.

Source for test

#include <iostream>

int main() {
	std::cout << "Hello world!" << std::endl;
	return 0;
}

Generate the IR code

clang++ -S -emit-llvm main.cpp

You can see the IR code here. I was a bit surprised by how verbose this code is; it was by far the largest of the four tests.

Run the code

lli  main.bc 
Hello world!

Generate Assembly Code

You can see the results from running this on my Mac here.

llc main.ll

Future Areas to Research

One of the areas that I would like to spend some more time researching is understanding what optimizations are already being done in the examples I show above. Secondly, I would like to learn how to take IR code generated by some language and send it through some optimization passes to see what happens. That may be a topic of a future post.

My Tour of Rust – Day 4 – Ray Tracing Part 2

It has been three weeks since the last installment of my two-to-three-posts-a-week blog. As I noted in an earlier post, I am committing to spend at least an hour per day learning new things and writing them up in this blog. I decided that I wanted to complete the “Ray Tracing in One Weekend” book for this post. The previous post can be found here. I underestimated the new Rust concepts and conventions that would be required to do this, and how long it would take to write up my experience. So now, three weeks later, I am finally finished.

I have also decided to move on from Rust for a while and pursue some other interests in upcoming posts. I have found myself really loving Rust. I really appreciate all of the compile-time checks that can prevent problems later. There were only a couple of times when I got a clean compile but did not get the result I was expecting when I ran the program.

This exercise really forced me to understand ownership and borrowing more thoroughly than before.

Chapter 3 & 4 – Splitting code into modules

My first objective in starting the rest of the chapters of the book was to figure out how to structure a project with multiple files and modules.

The first thing that I did was create a separate file to split out different functionality. That turned out to be fairly straightforward. It was just a matter of declaring a module with the same name as the file from the main.rs file.

pub mod raytrace;

Then I wanted to split things even more and create a tree structure of directories. So I created a raytrace sub-directory, but I was not sure what to do next, because if I tried the mod statement above, none of my modules were found. Below is the file structure that I had.

src/main.rs
src/raytrace
src/raytrace/ray.rs
src/raytrace/material.rs
src/raytrace/vec.rs
src/raytrace/camera.rs

Then I searched and found this page about splitting modules across files. I saw that I needed a mod.rs file that would act like raytrace.rs did in my first step. Then, in that file, I could bring in the other nested modules as below.

pub mod camera;
pub mod material;
pub mod ray;
pub mod vec;

Chapter 5 – Traits, Collections and Polymorphism

Traits

This chapter introduced the very common object-oriented concepts of inheritance and polymorphism, via the C++ classes below.

class hitable {
    public:
        virtual bool hit(const ray& r, float t_min, float t_max, hit_record& rec) const = 0;
};

class sphere: public hitable  {
    public:
        sphere() {}
        sphere(vec3 cen, float r, material *m) : center(cen), radius(r), mat_ptr(m)  {};
        virtual bool hit(const ray& r, float tmin, float tmax, hit_record& rec) const;
        vec3 center;
        float radius;
};

class hitable_list: public hitable  {
    public:
        hitable_list() {}
        hitable_list(hitable **l, int n) {list = l; list_size = n; }
        virtual bool hit(const ray& r, float tmin, float tmax, hit_record& rec) const;
        hitable **list;
        int list_size;
};

So now I needed to create my first trait. Here I decided to do a direct port of the C++ code and used an out parameter (&mut HitRecord) to return the results. This decision caused me many headaches later, as discussed below.

pub trait Hitable {
    fn hit(&self, r : ray::Ray, t_min : f32, t_max : f32, rec : &mut HitRecord) -> bool;
}

Polymorphism

The following code does not work, and that would seem obvious, but then how do I get a polymorphic pointer or reference?

    let a : Hitable = Sphere::new(...);
    let b : Hitable = HitableList::new(...); 

So I thought about using references, but how is ownership going to work here? This still did not feel right.

    let a : &Hitable = &Sphere::new(...);
    let b : &Hitable = &HitableList::new(...);

Then I found out about Box. This seems to mostly parallel the idea of a plain owning pointer in C++, with the memory allocated on the heap, so it seemed to be what I needed.
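
What does work is boxing the values, as in the sketch below, which mirrors the snippets above with the constructor arguments still elided (newer Rust editions spell the type Box<dyn Hitable>):

    // Box::new allocates the value on the heap, and the Box owns it, so the
    // trait object can be stored or moved around like any other value.
    let a : Box<Hitable> = Box::new(Sphere::new(...));
    let b : Box<Hitable> = Box::new(HitableList::new(...));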

Collections

With this knowledge, I can now create my list. I decided to use Vec as the underlying collection. This left me with the following for HitableList; a sketch of how it can be filled in is shown after the struct.

pub struct HitableList {
    list : Vec<Box<Hitable>>,
}
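
For example, inside the module that defines HitableList, the list can be populated with boxed concrete types; the sphere arguments below are elided placeholders rather than an actual scene:

let mut list : Vec<Box<Hitable>> = Vec::new();
// Each Box<Sphere> coerces to a Box<Hitable> trait object as it is pushed.
list.push(Box::new(Sphere::new(...)));
list.push(Box::new(Sphere::new(...)));
let world = HitableList { list };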

Chapter 6 – drand48() – Crates

In this chapter, the author started using a random number generator, so I needed to figure out how to do this in Rust. Searching the Internet, I found that the common solution is to use a third-party crate, so I got to experience the process of adding a new dependency.

Adding the dependency to the Cargo.toml file:

[dependencies]
rand = "0.6"

Using the external crate in the code:

extern crate rand;
use rand::Rng;

pub fn drand48() -> f32 {
    let rand_float : f32 = rand::thread_rng().gen();
    rand_float
}

Pretty straightforward, and similar to Maven and Gradle in the Java world, though not quite as trivial as searching for and adding a package with the npm command in the Node.js/JavaScript world.

Chapter 7 – Nothing new related to Rust

This chapter was fairly uneventful regarding learning new Rust concepts.

Chapter 8 – HitRecord and Material

This chapter is where I hit a wall, and learning Rust got real! The book uses the common C/C++ idea of passing an out parameter by reference and then using it to get the output of the function. Starting back in chapter 5, I was doing a direct port of this in Rust, and it was working until now, when a pointer to a material was added to the hit record.

Until now, I had been deriving the Copy functionality for all of my structs, including Vec3, Ray, and HitRecord, like below.

#[derive(Copy, Clone, Debug)]
pub struct HitRecord {
    pub t : f32,
    pub p : vec::Vec3,
    pub normal : vec::Vec3,
}

However, with the following change in the C++ code, this became a problem: once the polymorphic Box was added, I could no longer derive Copy, because Box does not implement Copy.

struct hit_record
{
    float t;
    vec3 p;
    vec3 normal;
    material *mat_ptr;
};

Now I have the following struct, but I no longer get the relative ease/laziness of not having to worry about ownership and moving values out of scope.

pub struct HitRecord {
    pub t : f32,
    pub p : vec::Vec3,
    pub normal : vec::Vec3,
    pub material : Box<Material>,
}

Ownership and Borrowing

One of the key concepts of Rust is ownership of memory. Rust tries to solve some of the core issues of languages like C++, where memory leaks and corruption can happen fairly easily, without resorting to a garbage collection mechanism. There are ways to do reference counting in Rust, but by default the compiler can verify at compile time that ownership is being honored correctly.

Moving vs Copying

Previously, with the Vec3, Ray, and HitRecord structs, I did not have to worry about this much, since they could be copied. Now, if I did not pass by reference, I was doing a move.
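
A small self-contained sketch of the difference, using two made-up types rather than the ray tracer's:

#[derive(Copy, Clone)]
struct Point { x: f32, y: f32 }

struct Record { data: Vec<f32> } // Vec is not Copy, so Record cannot be Copy

fn main() {
    let p = Point { x: 1.0, y: 2.0 };
    let q = p; // p is copied; both p and q stay usable
    println!("{} {}", p.x, q.y);

    let r = Record { data: vec![1.0] };
    let s = r; // r is moved; using r after this line is a compile error
    println!("{}", s.data[0]);
}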

Out References

Previously, I mentioned that I decided to do a direct port of the C++ code with respect to the out parameter. Here I decided that a better approach would be to just return the struct created in the function and let Rust's move semantics deal with passing ownership of the struct out of the method.

Option returns to handle nulls

So now that my hit function returns/moves the struct out of the function, what happens when I need to return the equivalent of a null pointer? This is where the Option enum comes into play.
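
The trait signature then becomes the following (a sketch that matches the implementations shown below):

pub trait Hitable {
    // Return Some(HitRecord) for a hit within [t_min, t_max], or None otherwise.
    fn hit(&self, r : ray::Ray, t_min : f32, t_max : f32) -> Option<HitRecord>;
}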

Final implementations using Option

impl Hitable for Sphere {
    fn hit(&self, r : ray::Ray, t_min : f32, t_max : f32) -> Option<HitRecord> {
        let oc = r.origin() - self.center;
        let a = vec::dot(r.direction(), r.direction());
        let b = vec::dot(oc, r.direction());
        let c = vec::dot(oc, oc) - self.radius*self.radius;
        let discriminant = b*b - a*c;
        if discriminant > 0.0 {
            let temp = (-b - f32::sqrt(b*b-a*c))/a;
            if temp < t_max && temp > t_min {
                let t = temp;
                let p = r.point_at_parameter(t);
                let normal = (p - self.center) / self.radius;
                return Some(HitRecord{t, p, normal, material: self.material.clone()});
            }
            let temp = (-b + f32::sqrt(b*b-a*c))/a;
            if temp < t_max && temp > t_min {
                let t = temp;
                let p = r.point_at_parameter(t);
                let normal = (p - self.center) / self.radius;
                return Some(HitRecord{t, p, normal, material: self.material.clone()});
            }
        }
        None
    }
}
impl Hitable for HitableList {
    fn hit(&self, r: ray::Ray, t_min: f32, t_max: f32) -> Option<HitRecord> {
        let mut result : Option<HitRecord> = None;
        let mut closest_so_far = t_max;

        for a in &self.list {
            let temp_result = a.hit(r, t_min, closest_so_far);
            match temp_result {
                Some(rec) => {
                    closest_so_far = rec.t;
                    result = Some(rec);
                }
                None => {}
            }
        }
        result
    }
}
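
As a side note, a match with an empty None arm like the one above can also be written with if let; this is just an equivalent form, not the code I actually committed:

for a in &self.list {
    if let Some(rec) = a.hit(r, t_min, closest_so_far) {
        closest_so_far = rec.t;
        result = Some(rec);
    }
}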

Chapter 9-12 – Again, not much new related to Rust

Once I got through chapter 8 and all of the hurdles there, the rest of the book was pretty straightforward as far as the coding was concerned. The last chapter had an exercise to generate the image from the cover of the book. Here is my image.

Chapter 12 Image

Concluding

The full code for this can be found here. As I noted at the start of this post, this will likely be my last post on Rust for a while; I plan to branch off into some other technologies. I hope someone has found this useful, because it has been very valuable for me in forcing me to document what I have been learning. Thank you for taking the time to read, and follow me on Twitter @rushtonality for future updates and anything else I may find interesting.

My Tour of Rust – Day 3 – Ray Tracing Part 1

My blog updates have slowed down over the last week. I have been dealing with a cold. I still hope to have two posts a week, but at a bare minimum one post a week.

In planning the next steps for this series, I wanted to provide more structure to my learning, so I decided to work on an actual project. I follow Peter Shirley on Twitter @Peter_shirley and I have been meaning to go through his book Ray Tracing in One Weekend. So I decided to kill two birds with one stone and use it as a structure for learning some Rust.

I will just be covering the first two chapters of the book in this post, since just these two chapters required me to learn many new concepts in Rust. I provide the first code example from the book, but will not include any of the rest of the code. Please get Peter’s excellent book if you are interested.

Chapter 1

The first chapter is very basic: write a simple program that prints the data for a PPM-formatted file to stdout.

On OSX, the file is viewable by default, but on Windows, you will need a program, or you can use this online viewer. Below is the starting code in C++ that I am implementing in Rust.

#include <iostream>

int main()
{
	int nx = 200;
	int ny = 100;
	std::cout << "P3\n" << nx << " " << ny << "\n255\n";
	for (int j = ny - 1; j >= 0; j--) {
		for (int i = 0; i < nx; i++) {
			float r = float(i) / float(nx);
			float g = float(j) / float(ny);
			float b = 0.2;
			int ir = int(255.99*r);
			int ig = int(255.99*g);
			int ib = int(255.99*b);
			std::cout << ir << " " << ig << " " << ib << "\n";
		}
	}
}

Working with ranges in for loops

Like I discussed in a previous post, for loops in Rust can work with either an iterator or a range. One nice thing about ranges is that they also have a “rev” function, so you can iterate in reverse without the awkwardness (in my opinion) that is required to do the same in C++ or Java.

for j in (0..ny).rev() {
    for i in 0..nx {

Type Casting

Another thing this block of code required me to learn was how type casting works in Rust. It is handled with the “as” keyword. I find this a bit clunky compared with casts in other languages, especially since it requires extra parentheses in a complex expression.

fn main() {
    let nx = 200;
    let ny = 100;

    print!("P3\n{} {}\n255\n", nx, ny);
    for j in (0..ny).rev() {
        for i in 0..nx {
            let r = i as f32 / nx as f32;
            let g = j as f32 / ny as f32;
            let b :f32 = 0.2;
            let ir = (255.00*r) as i32;
            let ig = (255.00*g) as i32;
            let ib = (255.00*b) as i32;
            print!("{} {} {}\n", ir, ig, ib);
        }
    }
}

Now let’s run the example. I am using the release option to compile and run the optimized version of the code instead of the debug version; this was pointed out to me as feedback on my previous post.

cargo run --release > image.ppm

Success, here is the image produced.

Chapter 2

This chapter is focused on creating a 3D vector class that will be used throughout the book. This chapter presented a lot of learning opportunities in porting the code to Rust.

Structs and Methods

One of the first things that I learned here is that Rust does not have classes in the same sense as Java and C++. Instead there are structs, just like in C and, more recently, Go, and they contain only the data. Functions are then added in an impl block, taking a reference to “self”, to provide the equivalent of methods (a sketch of this is shown after the struct below). Another implication of this is that there is no inheritance in the traditional OOP sense. I will talk about traits later. The struct needed for this chapter is trivial.

pub struct Vec3 {
    e : [f32; 3],
}
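
As a sketch of how methods get attached, here is a length method in the style of the book’s vec3; the exact implementation in my code may differ slightly:

impl Vec3 {
    // Methods are functions in an impl block that borrow the struct through &self.
    pub fn length(&self) -> f32 {
        (self.e[0] * self.e[0] + self.e[1] * self.e[1] + self.e[2] * self.e[2]).sqrt()
    }
}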

Traits

The power of Rust comes when adding traits. Traits can be thought of as interfaces from Java, but they can also provide default implementations, so they can do more. There are a couple of ways to use traits. The derive below adds the default behavior of Copy and Clone, which just duplicates the memory of the struct, and adds the Debug trait, which allows printing the contents of the struct.

#[derive(Copy, Clone, Debug)]
pub struct Vec3 {
    e : [f32; 3],
}

Custom Constructors

In Rust there are no constructors in the C++/Java sense; you create a struct by initializing its fields directly. Instead of constructors, the convention in Rust is to create a “class level” associated function, usually named new. There is some guidance provided in the style documentation.

impl Vec3 {
    pub fn new(e0: f32, e1: f32, e2: f32) -> Vec3 {
        Vec3 { e: [e0, e1, e2] }
    }
}

Operator Overloading

In the vec3 implementation in the book, the author makes use of C++ operator overloading. I started by looking at operator overloading in Rust, and then I had to find the documentation for the operator traits here.

The one thing that tripped me up for a bit was the function overloading that was seemingly needed for the following operators.

inline vec3& operator*=(const vec3 &v2);
inline vec3& operator*=(const float t);

However, after additional research, I found that you can do it by explicitly specifying the type of the right-hand expression in the trait's generic parameter, as seen below. This gives the compiler the information it needs to pick the correct implementation.

impl std::ops::MulAssign<Vec3> for Vec3 {
    fn mul_assign(&mut self, other: Vec3) {
        self.e[0] *= other.e[0];
        self.e[1] *= other.e[1];
        self.e[2] *= other.e[2];
    }
}

impl std::ops::MulAssign<f32> for Vec3 {
    fn mul_assign(&mut self, other: f32) {
        self.e[0] *= other;
        self.e[1] *= other;
        self.e[2] *= other;
    }
}

Ownership and Borrowing

During the process of working on this, I got the following error, which led me to do additional reading about ownership and borrowing. I am very comfortable with memory management in C++, the differences between references and copy constructors, and const correctness, but I am not yet 100% comfortable with these concepts in Rust. I may write a follow-up post digging into this topic more in depth. A sketch of one way around the error follows the compiler output below.

cargo test --no-run --package rtinw_ch2 --bin rtinw_ch2 tests
   Compiling rtinw_ch2 v0.1.0 (/my-tour-of-rust/day3/rtinw_ch2)
error[E0382]: borrow of moved value: `v`
  --> src\main.rs:43:13
   |
43 |         v / v.length()
   |         -   ^ value borrowed here after move
   |         |
   |         value moved here
   |
   = note: move occurs because `v` has type `Vec3`, which does not implement the `Copy` trait
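
One fix is the Copy derive shown earlier, since a Copy value is duplicated rather than moved. An alternative sketch, if the type were not Copy, is to finish the borrow before the move (this assumes Vec3 implements Div<f32>, as the book’s vec3 does):

fn unit_vector(v: Vec3) -> Vec3 {
    // Compute the length first, so the division below can freely move v.
    let len = v.length();
    v / len
}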

Final Code

The full source code for my solution can be found here.

Summary

This post took much more time than I was originally planning. I found it very rewarding and challenging, but I am not sure if I should focus on smaller, more focused posts or continue to be more free-flowing. Please let me know what you think, and consider giving me a follow over on Twitter @rushtonality.

My Tour of Rust – Day 2 – Write some code

OK, now that we have an environment created, let’s start writing some code. For this project I will be using Visual Studio Code, but I have also tried IntelliJ IDEA with the Rust plugin.

Step 1 – Create a project and run it

There is an associated GitHub project paralleling this set of blog posts. There will be a directory for each day’s post, and there will also be tags for each step of each day. Here is the code for this step. I will not always include these links, but I will try to where the code is more complex or interesting.

$cargo new --bin day2
     Created binary (application) `day2` package
// Initial main.rs
fn main() {
    println!("Hello, world!");
}
$cargo run
   Compiling day2 v0.1.0 (/my-tour-of-rust/day2)
    Finished dev [unoptimized + debuginfo] target(s) in 5.81s
     Running `target/debug/day2`
Hello, world!

Step 2 – Some Simple Project Euler

I chose one of the early problems. I picked Largest prime factor. This is a pretty easy problem, so it should be simple enough to demonstrate basic functionality.

The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?

Project Euler Problem 3

Step 2.1 – Simple Solution

To start, I wrote a simple, unoptimized solution to the problem that works for 13195, the example that is provided. However, I left the program running overnight and it did not find a solution for 600851475143.

I again want to reiterate that this is not a code tutorial, but just my observations while learning some of the language. Here I am just writing the functions.

One interesting thing is the implicit return from a block when the semicolon is omitted on the final expression. I also like the arrow notation for the return type of the function.

A lot of this code looks similar to Java or C++ with regard to the if and for loop structure; the main difference is the use of the range 2..=(number/2), which is similar to Python and several other languages.

fn is_prime(number : i64) -> bool {
    for candidate in 2..=(number/2) {
        if number % candidate == 0 {
            return false;
        }
    }
    true
}

fn largest_prime_factor(number : i64) -> i64 {
    let mut prime = 1;
    for factor in 1..number {
        if number % factor == 0 && is_prime(factor){
            prime = factor
        }
    }
    prime
}

fn main() {
    let number = 13195;
    // Largest prime factor of 13195: 29
    println!("Largest prime factor of {}: {}", number, largest_prime_factor(number));

    let number : i64 = 60085143;
    // Largest prime factor of 60085143: 13597
    println!("Largest prime factor of {}: {}", number, largest_prime_factor(number));

    let number : i64 = 600851475143;
    // Largest prime factor of 600851475143: 6857
    println!("Largest prime factor of {}: {}", number, largest_prime_factor(number));
}

Step 2.2 – Some Optimization

OK, let’s add some optimizations. I was hoping that there might be some default tooling for profiling as part of the standard Rust install, but there is not, so instead I just analyzed the code by eye.

The first change is to the is_prime function, since it is called a lot. I had already added an initial optimization that capped the candidates at the number divided by 2 and started at 2 instead of 1. I realized that this cap could be generalized: the upper limit can shrink to number / candidate as the candidates grow.

The second change is to search from the largest cofactor down: the loop finds the smallest factor, checks whether number / factor is prime, and stops as soon as one is found. The original solution looped through every single possibility and kept the largest one.

With the solution below, the whole thing ran in around 6 seconds on my old MacBook Pro, so I will call that a victory and continue.

pub fn is_prime(number : i64) -> bool {
    let mut candidate : i64 = 2;
    let mut limit = number / 2;

    while candidate <= limit {
        if number % candidate == 0 {
            return false;
        }
        limit = number / candidate;
        candidate += 1;
    }
    true
}

pub fn largest_prime_factor(number : i64) -> i64 {
    let mut prime = 1;
    for factor in 2..number {
        if number % factor == 0 && is_prime(number / factor) {
            prime = number / factor;
            break;
        }
    }
    prime
}

Step 2.3 – Adding Documentation

Now that the code is working, we should add some documentation. This is done with three slashes (///). Within these doc comments, you can add formatted documentation using Markdown.

/// Return true if the number is prime 
///
/// # Arguments
/// 
/// * `number` - The number to check
pub fn is_prime(number : i64) -> bool {
    let mut candidate : i64 = 2;
    let mut limit = number / 2;

    while candidate <= limit {
        if number % candidate == 0 {
            return false;
        }
        limit = number / candidate;
        candidate += 1;
    }
    true
}

/// Return the largest factor for the number that is also prime.
///
/// # Arguments
/// 
/// * `number` - The number to check
pub fn largest_prime_factor(number : i64) -> i64 {
    let mut prime = 1;
    for factor in 2..number {
        if number % factor == 0 && is_prime(number / factor) {
            prime = number / factor;
            break;
        }
    }
    prime
}

Once your documentation is finished, you can build the documentation using rustdoc or cargo. See the command below.

Note: The first time I ran this, it did not give any output because none of my functions were declared public. Once I changed that, I got the expected documentation.

$cargo doc --open
 Documenting day2 v0.1.0 (/my-tour-of-rust/day2)
    Finished dev [unoptimized + debuginfo] target(s) in 2.70s
     Opening /rushtonality/rust/my-tour-of-rust/day2/target/doc/day2/index.html

Step 2.4 – Adding Testing

To be a good software developer, you should be writing unit tests for your code, so let’s explore what Rust offers. There is built-in support for testing using the attributes you see below. Here we create a tests module and import all of the functions from the parent module. Then you create individual tests with the #[test] attribute.

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_prime_1() {
        assert_eq!(true, is_prime(1));
    }
    #[test]
    fn test_prime_2() {
        assert_eq!(true, is_prime(2));
    }
    #[test]
    fn test_prime_3() {
        assert_eq!(true, is_prime(3));
    }
    #[test]
    fn test_prime_4() {
        assert_eq!(false, is_prime(4));
    }
    #[test]
    fn test_prime_5() {
        assert_eq!(true, is_prime(5));
    }
    #[test]
    fn test_prime_large() {
        assert_eq!(true, is_prime(67_280_421_310_721));
    }
    
    #[test]
    fn test_largest_prime_factor_13195() {
        let number = 13195;
        let answer = 29;
        assert_eq!(answer, largest_prime_factor(number));
    }
    #[test]
    fn test_largest_prime_factor_60085143() {
        let number = 60085143;
        let answer = 13597;
        assert_eq!(answer, largest_prime_factor(number));
    }
    #[test]
    fn test_largest_prime_factor_600851475143() {
        let number = 600851475143;
        let answer = 6857;
        assert_eq!(answer, largest_prime_factor(number));
    }
}

Running the tests produces the results below and everything seems to be looking good.

$cargo test
   Compiling day2 v0.1.0 (/my-tour-of-rust/day2)
    Finished dev [unoptimized + debuginfo] target(s) in 1.38s
     Running target/debug/deps/day2-ae8bb176d077eb46

running 9 tests
test tests::test_largest_prime_factor_13195 ... ok
test tests::test_prime_1 ... ok
test tests::test_prime_2 ... ok
test tests::test_prime_3 ... ok
test tests::test_prime_4 ... ok
test tests::test_prime_5 ... ok
test tests::test_largest_prime_factor_60085143 ... ok
test tests::test_prime_large ... ok
test tests::test_largest_prime_factor_600851475143 ... ok

test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

Step 3 – Factorial and Recursion

When talking about functions, one of the things you usually learn early is the idea of recursion, typically starting with either the factorial or the Fibonacci sequence. I will explore that here, and I will also dive into tail recursion optimization, otherwise known as Tail Call Optimization.

Step 3.1 – Simple Solution

/// Return value in the Factorial Sequence for the given position.
/// 1, 1, 2, 6, 24, 120, 720, ...
///
/// # Arguments
/// 
/// * `number` - The position to evaluate
pub fn factorial_ver1(number : i64) -> i64 {
    if number == 0 {
        return 1;
    }
    return number * factorial_ver1(number-1);
}

Step 3.2 – Let’s try to get a stack overflow

    #[test]
    fn test_factorial_ver1_large() {
        let number = i64::max_value();
        let answer = -1;  // Expecting Stack Overflow
        let result = std::panic::catch_unwind(|| factorial_ver1(number));
        assert!(result.is_err()); 
    }

We have gotten it to fail. This is what we need in order to test whether tail recursion fixes the issue.

Note: When running this on my desktop, which has a lot more memory than this laptop, the following test fails, but with integer overflow instead. I had to add some extra junk, i.e. a big array of integers, to make the stack footprint of each call bigger in order to cause the stack overflow. I added “let a: [i64; 10000];” to the factorial_ver1 function.

...
thread 'tests::test_factorial_ver1_large' has overflowed its stack
fatal runtime error: stack overflow
error: process didn't exit successfully: `/my-tour-of-rust/day2/target/debug/deps/day2-ae8bb176d077eb46` (signal: 6, SIGABRT: process abort signal)

Step 3.3 – Let’s see if Tail Recursion fixes it

Here is what Wikipedia has to say about tail recursion, or tail calls. I am not going to go into a big discussion of this, but basically the idea is that if the last instruction of a function is the recursive call, then the compiler or interpreter can optimize it by removing the previous function's stack frame and replacing it with the new one.

I first learned about this many years ago when my sophomore-level LISP professor, Kurt Eiselt, introduced it in the context of the stability of systems using recursion, basically saying that it was a safer way to do recursion so that you did not need to worry about stack overflows. That stuck with me, and I relay it to you.

/// Return value in the Factorial Sequence for the given position.
/// 1, 1, 2, 6, 24, 120, 720, ...
///
/// # Arguments
/// 
/// * `number` - The position to evaluate
pub fn factorial_ver2(number : i64) -> i64 {
    factorial_ver2_internal(number, 1)
}

fn factorial_ver2_internal(number : i64, acc : i64) -> i64 {
    if number == 0 {
        return acc;
    }
    factorial_ver2_internal(number - 1, acc * number)
}
...
mod tests {
    ...
    #[test]
    fn test_factorial_ver2_large() {
        let number = i64::max_value();
        let result = std::panic::catch_unwind(|| factorial_ver2(number));
        assert!(result.is_err()); 
    }
    ...
}

Based on this experiment, it does look like Tail Call Optimization is happening in Rust. A stack overflow cannot be caught/trapped, but an integer overflow panic can be, and that is what is happening in the test above that passes. I will go into error handling in more detail in a later post. Since it appears to work, I decided to take a deeper look.

Step 3.4 – Rabbit Hole of Tail Call Optimization in Rust

So I decided to search for “Rust Tail Recursion”. That brought me here and then here. Then I found this. After following it around for a while, I was confused. Was this ever implemented? If it was, it appears that some scenarios are still not handled, because there is still fairly recent talk about this. So I do not know the state of this in Rust, but my experiment above seems to show that there is some optimization happening. If you know and can give me a brief explanation, please send me a note on Twitter @rushtonality.

The one thing that I did learn about was the idea of trampolines, and how Tail Call Optimization can be implemented (or at least approximated) to handle most/some scenarios. I found this page and also this fairly old post on it. It was an interesting rabbit hole that I did not expect to go down.
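
As a rough sketch of the trampoline idea (my own illustration, not code from those posts): each step returns either a final value or the arguments for the next step, and a plain loop drives it, so the stack never grows.

#[derive(Copy, Clone)]
enum Step {
    Done(i64),
    Next(i64, i64), // (remaining number, accumulator)
}

fn factorial_step(number: i64, acc: i64) -> Step {
    if number == 0 {
        Step::Done(acc)
    } else {
        Step::Next(number - 1, acc * number)
    }
}

pub fn factorial_trampoline(number: i64) -> i64 {
    // The loop replaces the recursive calls, so the stack depth stays constant.
    let mut step = Step::Next(number, 1);
    loop {
        match step {
            Step::Done(result) => return result,
            Step::Next(n, acc) => step = factorial_step(n, acc),
        }
    }
}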

Conclusions

That took a little longer to write up than I expected. I would love to hear any feedback, good or bad, regarding the series or this post. Please hit me up on Twitter @rushtonality, and follow me if you like what you see. Thank you.

My Tour of Rust – Day 1 – Environment Setup

The first step in learning something new is to download and install it. I personally work with multiple operating systems, including OSX, Windows, and Linux, so I will note the differences between the environments.

OSX

There are packages for Homebrew and MacPorts, but since Rust is still a fairly fast-moving project, I decided not to use these and instead use rustup.

Linux

Most Linux distributions also have a Rust package available, which is fine for production environments where stability is key. Again, for the purposes of my learning here, I decided to use rustup.

Windows

The standard rustup setup is a shell script, so that will not work on Windows. However, looking at the Cargo installation page, there is a link to a Windows installer, rustup-init.exe. This installer basically works the same as the rustup script; it installs into your home directory under .cargo.

rustup

The standard way to install is to fetch the rustup script from the URL below and pipe it through the shell.

curl https://sh.rustup.rs -sSf | sh

Now that we have installed Rust, what does that mean? Where is it installed, and what got installed? By default, the installation goes into the .cargo directory in your home directory. Several developer tools are installed, but the ones that I will focus on are rustup and cargo.

.cargo
.cargo/bin
.cargo/bin/rls
.cargo/bin/rustdoc
.cargo/bin/cargo
.cargo/bin/rust-lldb
.cargo/bin/cargo-clippy
.cargo/bin/rustc
.cargo/bin/cargo-fmt
.cargo/bin/rustup
.cargo/bin/rustfmt
.cargo/bin/rust-gdb
.cargo/env

The rustup command is the one-stop shop for your local Rust environment. You can switch to a different channel to get beta or nightly builds, update installed channels to the latest releases, and add cross-compile targets. Basically, anything related to the development tools can be managed through this command.

$rustup
rustup 1.16.0 (beab5ac2b 2018-12-06)
The Rust toolchain installer

USAGE:
    rustup [FLAGS] <SUBCOMMAND>

FLAGS:
    -v, --verbose    Enable verbose output
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    show           Show the active and installed toolchains
    update         Update Rust toolchains and rustup
    default        Set the default toolchain
    toolchain      Modify or query the installed toolchains
    target         Modify a toolchain's supported targets
    component      Modify a toolchain's installed components
    override       Modify directory toolchain overrides
    run            Run a command with an environment configured for a given toolchain
    which          Display which binary will be run for a given command
    doc            Open the documentation for the current toolchain
    man            View the man page for a given command
    self           Modify the rustup installation
    set            Alter rustup settings
    completions    Generate completion scripts for your shell
    help           Prints this message or the help of the given subcommand(s)

DISCUSSION:
    rustup installs The Rust Programming Language from the official
    release channels, enabling you to easily switch between stable,
    beta, and nightly compilers and keep them updated. It makes
    cross-compiling simpler with binary builds of the standard library
    for common platforms.

    If you are new to Rust consider running `rustup doc --book` to
    learn Rust.

Examples

# List the toolchain versions available
rustup toolchain list

# Install the beta channel
rustup toolchain install beta

# Install the nightly channel
rustup toolchain install nightly

# Make the nightly channel the default
rustup default nightly

# Make sure that rustup has been updated
rustup self update

# Make sure all of the channels have been update to latest
rustup update

Cargo

Cargo is your build and dependency management tool; think of it like npm or Maven. There are repositories of “crates”, which are third-party libraries.

$cargo init --help
cargo-init 
Create a new cargo package in an existing directory

USAGE:
    cargo init [OPTIONS] [--] [path]

OPTIONS:
        --registry <REGISTRY>    Registry to use
        --vcs <VCS>              Initialize a new repository for the given version control system (git, hg, pijul, or
                                 fossil) or do not initialize any version control at all (none), overriding a global
                                 configuration. [possible values: git, hg, pijul, fossil, none]
        --bin                    Use a binary (application) template [default]
        --lib                    Use a library template
        --edition <YEAR>         Edition to set for the crate generated [possible values: 2015, 2018]
        --name <NAME>            Set the resulting package name, defaults to the directory name
    -v, --verbose                Use verbose output (-vv very verbose/build.rs output)
    -q, --quiet                  No output printed to stdout
        --color <WHEN>           Coloring: auto, always, never
        --frozen                 Require Cargo.lock and cache are up to date
        --locked                 Require Cargo.lock is up to date
    -Z <FLAG>...                 Unstable (nightly-only) flags to Cargo, see 'cargo -Z help' for details
    -h, --help                   Prints help information

ARGS:
    <path>     [default: .]

Examples

# Create an empty binary project in the current directory
cargo init

# Create an empty library project in the current directory
cargo init --lib

# Create an empty library in a new my-library directory
cargo init --lib my-library

Conclusions

The Rust development environment is easy to configure and update. Cargo makes dependency management straightforward and makes the language very approachable if you have experience with npm or another package manager. In the next post, I will start actually coding and will discuss Cargo in more detail. If you have any questions or corrections, please send them to @rushtonality.

My Tour of Rust – Introduction

This is a series of posts about my brief tour through the Rust programming language. I plan for this series to be between 5 and 10 entries describing my initial thoughts about the language and ecosystem. It is not intended to be a thorough review or critique of the language, nor is it meant to be a tutorial. There are already many good resources out there, including The Book – The Rust Programming Language.

This is the first set of blog posts that I am writing, and you may be asking: why start with Rust? During AWS re:Invent 2018, Amazon announced the open-sourcing of Firecracker, a microVM technology that is also the foundation of AWS Lambda. I decided to take a look, and what I found surprised me: digging through the source code on GitHub, I saw that it was written in Rust.

With so much of the container world being built on Go, it surprised me that a relatively little-used language (in my mind) like Rust would be used for such an important part of AWS. But then I realized that Google created Go, and that was likely part of the reason, in addition to the rationale in the announcement. So, with Amazon choosing to put some weight behind Rust, I decided I should finally take a deeper look.

I have known about Rust for years, but I basically put it in the same bucket as D and, later, Go: it looked like a successor to C and C++ as a systems programming language, and I was not convinced that any of these would gain a ton of traction, although Go in the container world is proving the error of my thinking. I used to love C and its simplicity compared with C++, even with the verbosity of doing anything substantial, but hey, I used to like assembly too.

This series of posts documents my initial journey into Rust and my initial thoughts on it. If you are new to Rust, hopefully it may inspire you to take a look at the language. If you are a Rustacean, please have patience with me and send corrections for anything I get wrong to @rushtonality.