A new version of WekaDeeplearning4j, version 1.6.0, has just been released and brings with it a bunch of exciting new features.
The package already contained a model zoo, a set of model architectures designed by others (e.g., AlexNet, ResNet50), but there was no easy way to use a pretrained checkpoint in Dl4jMlpClassifier, so these models had to be trained from scratch. The new release of WekaDeeplearning4j both expands the model zoo to over 30 models and provides an easy way to initialize a model with the publicly available pre-trained weights for these models. This makes it even easier to start playing with state-of-the-art neural networks (e.g., EfficientNet): not only do you not need programming experience, thanks to the WEKA GUI, but you no longer need a beefy GPU to get useful results, because these models can be used as feature extractors with their pre-trained weights and no extra training.
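As a rough sketch of how a pretrained zoo model might be plugged into Dl4jMlpClassifier from Java code (the class and method names Dl4jResNet50, setZooModel, and ImageInstanceIterator reflect the package's API as I understand it, and the dataset path is a placeholder; always check the current documentation):

```java
import java.io.File;

import weka.classifiers.functions.Dl4jMlpClassifier;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.dl4j.iterators.instance.ImageInstanceIterator;
import weka.dl4j.zoo.Dl4jResNet50;

public class PretrainedZooExample {
    public static void main(String[] args) throws Exception {
        // A "meta" ARFF: one string attribute holding each image's filename,
        // plus the class attribute (placeholder file name).
        Instances data = DataSource.read("dataset.meta.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Tell the classifier where the actual image files live.
        ImageInstanceIterator iterator = new ImageInstanceIterator();
        iterator.setImagesLocation(new File("images"));

        Dl4jMlpClassifier clf = new Dl4jMlpClassifier();
        clf.setInstanceIterator(iterator);
        // Select a zoo model; the pretrained weights are downloaded
        // automatically the first time the model is built.
        clf.setZooModel(new Dl4jResNet50());
        clf.setNumEpochs(5);
        clf.buildClassifier(data); // fine-tunes from the pretrained weights
    }
}
```

The same configuration can be set up entirely from the WEKA GUI by choosing Dl4jMlpClassifier and picking a zoo model in its options panel.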
The Dl4jMlpFilter allows you to use a model, which can now be a pretrained model from the model zoo, as a feature extractor, converting an image dataset into numeric features that can be used with any off-the-shelf WEKA classifier. The new version also supports multiple feature-extraction layers: by default the last dense layer is used, but you can additionally select any intermediate layer, and the activations from all selected layers are concatenated. This opens up a huge new world of experimentation.
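A minimal sketch of using the filter programmatically, assuming a setter named setZooModelType and pairing the extracted features with SMO purely for illustration (layer selection is left at its default, the last dense layer):

```java
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.dl4j.zoo.Dl4jResNet50;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Dl4jMlpFilter;

public class FeatureExtractionExample {
    public static void main(String[] args) throws Exception {
        Instances images = DataSource.read("dataset.meta.arff"); // placeholder
        images.setClassIndex(images.numAttributes() - 1);

        Dl4jMlpFilter filter = new Dl4jMlpFilter();
        filter.setZooModelType(new Dl4jResNet50()); // assumed setter name
        filter.setInputFormat(images);

        // Each image becomes a fixed-length numeric feature vector,
        // computed with the pretrained weights (no training required).
        Instances features = Filter.useFilter(images, filter);

        // Any off-the-shelf WEKA classifier can now train on the features.
        SMO svm = new SMO();
        svm.buildClassifier(features);
    }
}
```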
Image Dataset Conversion Script
Some image classification datasets come in a simple 'folder-organised' form, where the images are split into subfolders and the name of each subfolder gives the class of all images within it. To make it easier to work with datasets of this kind, the package now includes the ImageDirectoryLoader, which can load them directly. It works analogously to WEKA's TextDirectoryLoader.
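Used from code, the loader might look like the sketch below (the setter name setInputDirectory and the placeholder path are assumptions; from the GUI you would simply browse to the top-level folder):

```java
import java.io.File;

import weka.core.Instances;
import weka.core.converters.ImageDirectoryLoader;

public class DirectoryLoadExample {
    public static void main(String[] args) throws Exception {
        ImageDirectoryLoader loader = new ImageDirectoryLoader();
        // Top-level folder whose subfolders name the classes (placeholder path).
        loader.setInputDirectory(new File("my-dataset/train")); // assumed setter

        // Produces a "meta" dataset of image filename plus class label,
        // ready for Dl4jMlpClassifier or Dl4jMlpFilter.
        Instances data = loader.getDataSet();
        System.out.println(data.numInstances() + " images loaded");
    }
}
```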
Check out the documentation for more info on these new features!