
Datumbox Machine Learning Framework version 0.8.0 released


  • January 15, 2017
  • Vasilis Vryniotis
  • 1 Comment

Datumbox Framework v0.8.0 is out and packs several powerful features! This version brings new Preprocessing, Feature Selection and Model Selection algorithms, new powerful Storage Engines that give better control over how the Models and the Dataframes are saved/loaded, several pre-trained Machine Learning models, and lots of memory & speed improvements. Download it now from Github or the Maven Central Repository.

One of the main targets of version 0.8.0 was to improve the Storage mechanisms of the framework and make disk-based training available to all the supported algorithms. The new storage engines give better control over how and when the models are persisted. One important change is that models are no longer stored automatically after the fit() method finishes; instead, one needs to explicitly call the save() method, providing the name of the model. This makes it easier to discard temporary models without going through a serialization phase, and it also allows saving/loading the Dataframes:


Configuration configuration = Configuration.getConfiguration();
Dataframe data = ...; //load a dataframe here

MaximumEntropy.TrainingParameters params = new MaximumEntropy.TrainingParameters();
MaximumEntropy model = MLBuilder.create(params, configuration);
model.fit(data);
model.save("MyModel"); //save the model using the specific name
model.close();

data.save("MyData"); //save the data using a specific name
data.close();

data = Dataframe.Builder.load("MyData", configuration); //load the data
model = MLBuilder.load(MaximumEntropy.class, "MyModel", configuration); //load the model
model.predict(data);
model.delete(); //delete the model

Currently we support two storage engines: the InMemory engine, which is very fast as it loads everything in memory, and the MapDB engine, which is slower but permits disk-based training. You can control which engine you use by changing your datumbox.configuration.properties file, or you can modify the configuration objects programmatically. Each engine has its own configuration file, but again you can modify everything programmatically:


Configuration configuration = Configuration.getConfiguration(); //conf from properties file

configuration.setStorageConfiguration(new InMemoryConfiguration()); //use In-Memory engine
//configuration.setStorageConfiguration(new MapDBConfiguration()); //use MapDB engine

Please note that in both engines there is a directory setting which controls where the models are stored (the inMemoryConfiguration.directory and mapDBConfiguration.directory properties in the config files). Make sure you change them, or else the models will be written to the temporary folder of your system. For more information on how to structure the configuration files, have a look at the Code Examples project.
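
For example, the storage directory can also be changed programmatically rather than via the properties files; the setDirectory() setter below is an assumption that mirrors the *.directory properties mentioned above, and the path is illustrative:

Configuration configuration = Configuration.getConfiguration();

InMemoryConfiguration inMemoryConfiguration = new InMemoryConfiguration();
inMemoryConfiguration.setDirectory("/opt/datumbox/models"); //assumed setter; path is illustrative
configuration.setStorageConfiguration(inMemoryConfiguration); //models now persist outside the temp folder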

With the new Storage mechanism in place, it is now feasible to publicly share pre-trained models that cover the areas of Sentiment Analysis, Spam Detection, Language Detection, Topic Classification and all the other models that are available via the Datumbox API. You can now download and use all the pre-trained models in your project without calling the API and without being limited by the number of daily calls. Currently the published models are trained using the InMemory storage engine and they support only English. In future releases, I plan to provide support for more languages.
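
Once a downloaded model has been placed in the configured storage directory, loading it should follow the same MLBuilder.load() pattern shown earlier. The class, the model name and the predict(String) call below are assumptions used for illustration only:

Configuration configuration = Configuration.getConfiguration();

//class and model name are placeholders; use whatever the downloaded package documents
TextClassifier sentimentModel = MLBuilder.load(TextClassifier.class, "SentimentAnalysis", configuration);
Record sentiment = sentimentModel.predict("I love the new features!"); //predict(String) signature is assumed
sentimentModel.close();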

In the new framework, there are several changes to the public methods of many of the classes (hence it is not backwards compatible). The most notable difference is in the way the models are initialized. As we saw in the earlier code example, the models are not instantiated directly; instead, the MLBuilder class is used to either create or load a model. The training parameters are provided directly to the builder and cannot be changed afterwards with a setter.

Another improvement is in the way we perform Model Selection. Version 0.8.0 introduces the new modelselection package, which offers all the necessary tools for validating and measuring the performance of our models. The metrics subpackage provides the most important validation metrics for classification, clustering, regression and recommendation. Note that the ValidationMetrics are removed from each individual algorithm and are no longer stored together with the model. The framework also offers the new splitters subpackage, which enables splitting the original dataset using different schemes. Currently K-fold splits are performed using the KFoldSplitter class, while partitioning the dataset into a training and a test set can be achieved with the ShuffleSplitter. Finally, to quickly validate a model, the framework offers the Validator class. Here is how one can perform K-fold cross validation within a couple of lines of code:


int k = 5; //number of folds
ClassificationMetrics vm = new Validator<>(ClassificationMetrics.class, configuration)
    .validate(new KFoldSplitter(k).split(data), new MaximumEntropy.TrainingParameters());
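
A simple hold-out evaluation follows the same pattern with the ShuffleSplitter; the constructor arguments below (proportion of records assigned to the training split and number of splits) are assumptions:

//constructor arguments are assumptions: train-split proportion and number of splits
ClassificationMetrics holdOut = new Validator<>(ClassificationMetrics.class, configuration)
    .validate(new ShuffleSplitter(0.8, 1).split(data), new MaximumEntropy.TrainingParameters());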

The new Preprocessing package replaces the old Data Transformers and gives better control over how we scale and encode the data before passing it to the machine learning algorithms. The following algorithms are supported for scaling numerical variables: MinMaxScaler, StandardScaler, MaxAbsScaler and BinaryScaler. For encoding categorical variables into booleans you can use the OneHotEncoder and CornerConstraintsEncoder methods. Here is how you can use the new algorithms:


StandardScaler numericalScaler = MLBuilder.create(
    new StandardScaler.TrainingParameters(), 
    configuration
);
numericalScaler.fit_transform(trainingData); //fit the scaler on the training data and transform it

CornerConstraintsEncoder categoricalEncoder = MLBuilder.create(
    new CornerConstraintsEncoder.TrainingParameters(), 
    configuration
);
categoricalEncoder.fit_transform(trainingData); //fit the encoder on the training data and transform it

Another important update is that the Feature Selection package has been rewritten. All feature selection algorithms now focus on specific datatypes, making it possible to chain different methods together. As a result, the TextClassifier and Modeler classes receive a list of feature-selector parameters rather than just one.
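
For instance, chaining two selectors when configuring a TextClassifier might look like the sketch below; the setter name is hypothetical and the selector combination is purely illustrative:

//the setter name is hypothetical; check the TrainingParameters class for the actual method
//requires java.util.Arrays
TextClassifier.TrainingParameters trainingParameters = new TextClassifier.TrainingParameters();
trainingParameters.setFeatureSelectorTrainingParametersList(Arrays.asList(
    new ChisquareSelect.TrainingParameters(), //selects the most informative categorical features
    new TFIDF.TrainingParameters()            //scores the numerical text features
));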

As mentioned earlier, all the algorithms now support disk-based training, including those that use Matrices (the only exception is Support Vector Machines). The new storage-engine mechanism even makes it possible to keep some algorithms or dataframes in memory while storing others on disk. Several speed improvements were introduced, primarily due to the new storage engine mechanism but also due to the tuning of individual algorithms such as those in the DPMM family.
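
For example, one can use separate Configuration objects so that the dataframe stays in memory while the model is trained against the disk-backed MapDB engine. This is a sketch based only on the configuration API shown above; it assumes Configuration.getConfiguration() returns an independent object on each call:

Configuration inMemoryConf = Configuration.getConfiguration();
inMemoryConf.setStorageConfiguration(new InMemoryConfiguration()); //dataframe kept in memory

Configuration mapDBConf = Configuration.getConfiguration();
mapDBConf.setStorageConfiguration(new MapDBConfiguration()); //model stored on disk

Dataframe data = Dataframe.Builder.load("MyData", inMemoryConf);
MaximumEntropy model = MLBuilder.create(new MaximumEntropy.TrainingParameters(), mapDBConf);
model.fit(data);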

Last but not least, the new version updates all the dependencies to their latest versions and removes some of them, such as commons-lang and lp_solve. commons-lang, which was used for HTML parsing, is replaced with a faster custom HTMLParser implementation. lp_solve is replaced with a pure Java simplex solver, which means that Datumbox no longer requires specific system libraries to be installed on the operating system. Moreover, lp_solve had to go because it uses LGPLv2, which is not compatible with the Apache 2.0 license.

Version 0.8.0 brings several more new features and improvements to the framework. For a detailed view of the changes, please check the Changelog.

 

Don’t forget to clone the code of Datumbox Framework v0.8.0 from Github, check out the Code Examples and download the pre-trained Machine Learning models from Datumbox Zoo. I am looking forward to your comments and suggestions.


