Chapter 8: Create ML Project Interface and Model Training
A Create ML project offers a comprehensive workspace that streamlines the model training process. Each section of the interface provides its own set of tools, so let’s take some time to explore them.
Workspace
On the left-hand side, you’ll find all project-related details, such as the project name, training source, and any data sources.
In the middle, you’ll find your workspace, where you can add your training data, validation data, and test data. You can also choose options like the base model you want to use for transfer learning, the number of iterations, and any augmentation configuration to make training more robust.
On the right side, you’ll find the activity panel, where you can view the training accuracy and a log of the activities that took place in the project.
The middle workspace contains several key options, including Training, Evaluation, Preview, and Output. However, these options remain inactive until model training completes successfully. Once training is finished, we can use these features to analyze the model’s performance, test it with sample data, and review its predictions before deploying it in real-world apps. This structured workflow ensures that we refine and validate the model before integrating it into our app projects.
Folder Hierarchy
Now, it’s time to add training data to our project. To do this, simply drag and drop the “images” folder onto the “Training Data” section within the workspace. Make sure that all your subfolders are properly organized within a single parent folder, following the structure shown below. This setup ensures that Create ML correctly interprets the data and categorizes images appropriately for training.
Image credit: Apple Create ML
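As a concrete illustration, the parent folder contains one subfolder per class, each holding that class’s images. The folder and file names below are placeholders; use your own class labels:

```
images/
├── apple/
│   ├── apple_001.jpg
│   └── apple_002.jpg
├── banana/
│   ├── banana_001.jpg
│   └── banana_002.jpg
└── orange/
    ├── orange_001.jpg
    └── orange_002.jpg
```

Create ML uses each subfolder’s name as the class label for the images inside it.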
Data Section
After adding the folder containing all the training data, Create ML will automatically analyze its contents. The tool will then display key details, including the total number of classes it has identified and the total number of images within the dataset. This step helps ensure that our data is correctly structured before proceeding with model training.
Notice that additional sections, such as Validation Data and Testing Data, are available. At this stage, we have the option to either provide our own validation dataset or use the default Auto setting. If we choose Auto, Create ML will automatically split a portion of the training data to be used for validation. This ensures that the model is evaluated on unseen data during training, improving its overall performance and reliability.
Parameters Section
In the Parameters section, we will choose the “Image Feature Print V2” model for feature extraction (transfer learning). This model helps leverage pre-trained features to improve the efficiency of training.
For the remaining options, let’s keep everything at their default settings. This includes setting the iterations to 25 and leaving all augmentation options unchecked. These settings are typically a good starting point for training, but you can adjust them later if needed based on performance results.
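The same configuration can also be expressed in code with Apple’s CreateML framework (available on macOS). This is a sketch of the programmatic equivalent of the GUI settings above, not part of the GUI workflow itself; the training path is a placeholder:

```swift
import CreateML
import Foundation

// Placeholder path to the parent "images" folder described earlier.
let trainingDir = URL(fileURLWithPath: "/path/to/images")

// Mirror the GUI defaults: Image Feature Print V2 for transfer learning,
// 25 iterations, no augmentation, and an automatic validation split.
let parameters = MLImageClassifier.ModelParameters(
    validation: .split(strategy: .automatic),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)

// Train the classifier from labeled subfolders (one folder per class).
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: parameters
)
```

Here `.scenePrint(revision: 2)` corresponds to the “Image Feature Print V2” feature extractor selected in the Parameters section.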
Model Training
We’re now ready to begin training our model. To start, simply click on the “Train” button located at the top left corner of the workspace. This will initiate the training process, and Create ML will begin processing the data and fine-tuning the model based on the settings we have configured.
Once the training process is complete, we will see the training results displayed at the top of the activity panel. These results provide insights into how well the model performed during training, including metrics like accuracy, loss, and other relevant performance indicators. This allows us to assess the effectiveness of the model before moving on to the next steps.
Evaluation Tab
Next, open the Evaluation tab and add the testing data. In this case, I have collected images for each of the nine fruit classes and organized them into corresponding folders. These folders are then placed into a single test folder, which I will upload here for evaluation. This step allows the model to be tested on new, unseen data to gauge its performance and accuracy.
After adding the test data, click on the Test button to evaluate the model’s performance on images it hasn’t encountered before. This step helps assess how well the model generalizes to new, unseen data and provides insights into its real-world accuracy.
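The same evaluation step can be run programmatically, assuming a trained `MLImageClassifier` instance named `classifier` and a placeholder path to the test folder (one subfolder per class, as with the training data):

```swift
import Foundation

// Placeholder path to the test folder with one subfolder per class.
let testDir = URL(fileURLWithPath: "/path/to/test-images")

// Evaluate on held-out data and derive the overall accuracy.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
let accuracy = (1.0 - evaluation.classificationError) * 100
print("Test accuracy: \(accuracy)%")
```

The returned metrics object also exposes per-class results, which roughly correspond to what the Metrics table shows in the Evaluation tab.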
By selecting the blue numbers in the Metrics table, we will be taken to the Explore tab. Here, we can view a detailed breakdown of all the accurate and inaccurate predictions made by the model. This helps us analyze its strengths and weaknesses, allowing us to fine-tune the model further if needed.
Preview & Predict
We can also preview the model’s predictions by dragging and dropping an image into the Preview tab. This allows us to quickly see how the model predicts the class of the image and assess its performance on individual samples.
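Programmatically, the equivalent of dropping an image into the Preview tab is a single prediction call, again assuming a trained `MLImageClassifier` named `classifier` and a placeholder image path:

```swift
import Foundation

// Predict the class label of a single sample image (placeholder path).
let sampleImage = URL(fileURLWithPath: "/path/to/sample.jpg")
let label = try classifier.prediction(from: sampleImage)
print("Predicted class: \(label)")
```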
Model Export
We can see that our model is performing quite well. Now, to integrate it into an Apple app, let’s export it in the Core ML format. To do this, go to the Output tab and click the “Get” button. This will allow us to save the model and prepare it for use within our apps.
Once the model is exported, we can simply double-click on it to open it within Xcode. This will allow us to explore the model in more detail, such as inspecting its layers, weights, and other parameters, as well as integrating it into our app for further development.
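Export can also be done in code via the CreateML framework, assuming a trained `MLImageClassifier` named `classifier`; the output path, model name, and metadata fields below are placeholders:

```swift
import CreateML
import Foundation

// Placeholder metadata embedded in the exported .mlmodel file.
let metadata = MLModelMetadata(
    author: "Your Name",
    shortDescription: "Fruit image classifier trained with Create ML",
    version: "1.0"
)

// Write the trained model to disk in Core ML format (placeholder path).
try classifier.write(
    to: URL(fileURLWithPath: "/path/to/FruitClassifier.mlmodel"),
    metadata: metadata
)
```

When the exported .mlmodel file is added to an Xcode project, Xcode generates a Swift class for it that can be used with the Core ML and Vision frameworks.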
Conclusion
In this chapter, we trained our own model from scratch using a custom dataset and the Create ML platform. This process involved preparing the data, configuring the model, and evaluating its performance to ensure it was ready for use in an app.