Exploring Ruby’s Deep Learning Libraries: Building Neural Networks
In the vast landscape of programming languages for deep learning, Ruby might not be the first choice that comes to mind. However, with its user-friendly syntax and a growing ecosystem of libraries, Ruby is a viable option for building neural networks. In this blog, we will embark on a journey to explore Ruby’s deep learning capabilities and guide you through the process of building neural networks from scratch.
Table of Contents
1. Why Ruby for Deep Learning?
2. Setting up the Environment
3. Building a Neural Network from Scratch
1. Why Ruby for Deep Learning?
Before we dive into the practical aspects, let’s address the question: Why Ruby for deep learning? While languages like Python are more commonly associated with deep learning due to their extensive libraries and frameworks, Ruby offers several advantages worth considering:
1.1. Expressive Syntax
Ruby’s clean and expressive syntax makes it easy to read and write code. This can be a significant advantage when developing complex neural network architectures, as code readability is crucial for collaboration and maintenance.
1.2. Versatility
Ruby is a versatile language with a rich ecosystem of gems (libraries) that can be leveraged for deep learning. While it may not have as many dedicated deep learning frameworks as Python, Ruby’s flexibility allows you to integrate with other libraries and tools seamlessly.
1.3. Familiarity
For developers already familiar with Ruby, there’s a natural inclination to explore its capabilities in the realm of deep learning. Using a language you’re comfortable with can accelerate the learning process.
Now that we’ve established the rationale behind using Ruby, let’s delve into the practical aspects.
2. Setting up the Environment
Before we start building neural networks, it’s essential to set up a development environment. We’ll rely on two key Ruby gems: Numo and Daru.
2.1. Installing Numo
Numo is a numerical computing library for Ruby that provides a NumPy-like interface for array operations. To install Numo, you can use the following command:
```bash
gem install numo-narray
```
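If the install succeeds, a quick sanity check confirms Numo's NumPy-like feel. This is just a minimal illustration; the random values will differ on each run:

```ruby
require 'numo/narray'

# Build a 2x3 matrix of uniform random floats
a = Numo::DFloat.new(2, 3).rand

puts a.shape.inspect            # => [2, 3]
puts (a * 2.0).sum              # element-wise arithmetic and reduction
puts a.dot(a.transpose).inspect # (2,3) x (3,2) matrix product
```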
2.2. Installing Daru
Daru is a data manipulation and analysis library for Ruby, similar to Pandas in Python. It’s helpful for preprocessing and handling datasets. Install Daru with:
```bash
gem install daru
```
With Numo and Daru in place, we’re ready to begin our journey into deep learning.
3. Building a Neural Network from Scratch
In this section, we’ll build a simple feedforward neural network from scratch using Numo for array operations and Daru for data manipulation. Our goal is to create a basic neural network that can perform binary classification.
3.1. Data Preparation
Let’s start by preparing our dataset. For this example, we’ll generate synthetic data. In practice, you would replace this with your own dataset.
```ruby
require 'daru'
require 'numo/narray'

# Generate synthetic data: 100 samples, each with 2 random features in [0, 1)
num_samples = 100
input_data = Numo::DFloat.new(num_samples, 2).rand

# Label a sample 1 when its two features sum to more than 1.0, else 0
labels = input_data.sum(axis: 1).to_a.map { |s| s > 1.0 ? 1 : 0 }

# Create a Daru DataFrame from the feature columns and labels
df = Daru::DataFrame.new(
  feature1: input_data[true, 0].to_a,
  feature2: input_data[true, 1].to_a,
  label: labels
)

# Split the data into training (first 80 rows) and testing (last 20 rows) sets
train_data = df.row[0..79]
test_data  = df.row[80..99]
```
In this code, we’ve generated random data with two features and assigned a label of 1 whenever the features sum to more than 1.0 (and 0 otherwise). We’ve then created a Daru DataFrame and split it into training and testing sets.
3.2. Model Architecture
Now, let’s define our neural network architecture. The network will have an input layer, one hidden layer, and an output layer. We’ll use a sigmoid activation for the hidden layer and a sigmoid on the output layer so that predictions fall between 0 and 1 for binary classification.
```ruby
class NeuralNetwork
  def initialize(input_size, hidden_size)
    @input_size  = input_size
    @hidden_size = hidden_size
    @output_size = 1

    # Initialize weights randomly and biases at zero
    @weights_input_hidden  = Numo::DFloat.new(@input_size, @hidden_size).rand
    @biases_hidden         = Numo::DFloat.zeros(@hidden_size)
    @weights_hidden_output = Numo::DFloat.new(@hidden_size, @output_size).rand
    @biases_output         = Numo::DFloat.zeros(@output_size)
  end

  def sigmoid(x)
    1.0 / (1.0 + Numo::NMath.exp(-x))
  end

  # Expects input as a (batch_size, input_size) matrix
  def forward(input)
    # Input to hidden layer
    @hidden_input  = input.dot(@weights_input_hidden) + @biases_hidden
    @hidden_output = sigmoid(@hidden_input)

    # Hidden to output layer, squashed to (0, 1) with a sigmoid
    @output = sigmoid(@hidden_output.dot(@weights_hidden_output) + @biases_output)
    @output
  end
end
```
Here, we’ve defined the NeuralNetwork class with methods for initializing the network’s weights and biases, applying the sigmoid activation function, and performing the forward pass. Note that forward expects its input as a matrix with one row per sample, which keeps the matrix shapes consistent during training.
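Before adding training logic, it’s worth sanity-checking the forward pass on a single sample. This is just a sketch: the weights are random at this point, so the exact output will vary, but it should land strictly between 0 and 1.

```ruby
nn = NeuralNetwork.new(2, 4)

# One sample as a 1x2 row matrix (batch dimension first)
sample = Numo::DFloat[[0.3, 0.8]]
puts nn.forward(sample).inspect # => 1x1 matrix with a value in (0, 1)
```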
3.3. Training the Neural Network
To train our neural network, we’ll use the backpropagation algorithm. We’ll define a loss function (mean squared error) and update the weights and biases using gradient descent.
```ruby
class NeuralNetwork
  # ... (previous code)

  def mean_squared_error(predictions, targets)
    ((predictions - targets)**2).mean
  end

  # Expects input as (batch_size, input_size) and target as (batch_size, 1)
  def backward(input, target, learning_rate)
    # Gradient of the MSE loss through the output sigmoid
    output_delta = 2 * (@output - target) * @output * (1.0 - @output)

    # Propagate the error back through the hidden layer's sigmoid
    hidden_delta = output_delta.dot(@weights_hidden_output.transpose) *
                   @hidden_output * (1.0 - @hidden_output)

    # Update weights and biases with gradient descent
    @weights_hidden_output -= @hidden_output.transpose.dot(output_delta) * learning_rate
    @biases_output         -= output_delta.sum(axis: 0) * learning_rate
    @weights_input_hidden  -= input.transpose.dot(hidden_delta) * learning_rate
    @biases_hidden         -= hidden_delta.sum(axis: 0) * learning_rate
  end
end
```
In this code, we’ve added methods for computing the mean squared error loss, performing backpropagation, and updating the weights and biases.
3.4. Training Loop
Now, let’s put it all together in a training loop.
```ruby
# Initialize the neural network
input_size = 2
hidden_size = 4
learning_rate = 0.01
epochs = 1000

nn = NeuralNetwork.new(input_size, hidden_size)

# The full training set as Numo matrices, for computing the epoch loss
train_inputs  = Numo::DFloat[train_data[:feature1].to_a, train_data[:feature2].to_a].transpose
train_targets = Numo::DFloat[train_data[:label].to_a].transpose

# Training loop
epochs.times do |epoch|
  train_data.each_row do |row|
    input  = Numo::DFloat[[row[:feature1], row[:feature2]]] # (1, 2) matrix
    target = Numo::DFloat[[row[:label]]]                    # (1, 1) matrix

    # Forward pass, then backpropagation and weight update
    nn.forward(input)
    nn.backward(input, target, learning_rate)
  end

  # Calculate and print the loss for this epoch
  train_loss = nn.mean_squared_error(nn.forward(train_inputs), train_targets)
  puts "Epoch #{epoch + 1}/#{epochs}, Loss: #{train_loss}"
end
```
In this training loop, we iterate through the dataset, compute predictions, perform backpropagation, and update the weights and biases for a specified number of epochs.
3.5. Testing the Model
Finally, let’s evaluate our trained neural network on the test data.
```ruby
# Testing the model
test_inputs  = Numo::DFloat[test_data[:feature1].to_a, test_data[:feature2].to_a].transpose
test_targets = Numo::DFloat[test_data[:label].to_a].transpose

test_loss = nn.mean_squared_error(nn.forward(test_inputs), test_targets)
puts "Test Loss: #{test_loss}"
```
This code calculates the mean squared error loss on the test data, providing an indication of the model’s performance.
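Because this is a binary classifier, raw MSE can be hard to interpret, so you may also want a plain accuracy score. Here’s a minimal sketch that reuses the test_inputs and test_targets arrays from the snippet above and thresholds the sigmoid outputs at 0.5:

```ruby
# Convert sigmoid outputs into hard 0/1 predictions and compare with the labels
predicted = nn.forward(test_inputs).to_a.flatten
actual    = test_targets.to_a.flatten

correct  = predicted.zip(actual).count { |p, t| (p > 0.5 ? 1 : 0) == t.to_i }
accuracy = correct.to_f / actual.size
puts "Test Accuracy: #{(accuracy * 100).round(1)}%"
```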
Conclusion
In this blog, we’ve embarked on a journey to explore Ruby’s deep learning capabilities by building a neural network from scratch. While Ruby may not be as popular as Python for deep learning, its clean syntax, versatility, and growing ecosystem of libraries make it a viable choice for certain projects.
As you delve deeper into the world of deep learning with Ruby, you’ll find more libraries and tools at your disposal, such as PyCall and the numpy gem, which bridge Numo arrays with Python’s NumPy for integration with Python libraries. With dedication and creativity, you can harness the power of Ruby to develop robust neural networks and tackle a wide range of machine learning tasks.
Whether you’re a seasoned Rubyist or simply curious about expanding your programming horizons, exploring deep learning with Ruby can be a rewarding endeavor. So, go ahead, experiment, and build neural networks that pave the way for exciting AI applications in the Ruby ecosystem. Happy coding!