
Datumbox Machine Learning Framework version 0.8.0 released

By Vasilis Vryniotis · January 15, 2017

Datumbox Framework v0.8.0 is out and packs several powerful features! This version brings new Preprocessing, Feature Selection and Model Selection algorithms, new powerful Storage Engines that give better control over how the Models and the Dataframes are saved/loaded, several pre-trained Machine Learning models and lots of memory & speed improvements. Download it now from Github or the Maven Central Repository.

One of the main targets of version 0.8.0 was to improve the Storage mechanisms of the framework and make disk-based training available to all the supported algorithms. The new storage engines give better control over how and when the models are persisted. One important change is that models are no longer stored automatically after the fit() method finishes; instead you need to explicitly call the save() method and provide a name for the model. This makes it easier to discard temporary models without going through a serialization phase, and it also allows saving/loading Dataframes:


Configuration configuration = Configuration.getConfiguration();
Dataframe data = ...; //load a dataframe here

MaximumEntropy.TrainingParameters params = new MaximumEntropy.TrainingParameters();
MaximumEntropy model = MLBuilder.create(params, configuration);
model.fit(data);
model.save("MyModel"); //save the model using the specific name
model.close();

data.save("MyData"); //save the data using a specific name
data.close();

data = Dataframe.Builder.load("MyData", configuration); //load the data
model = MLBuilder.load(MaximumEntropy.class, "MyModel", configuration); //load the model
model.predict(data);
model.delete(); //delete the model

Currently we support two storage engines: the InMemory engine, which is very fast as it loads everything in memory, and the MapDB engine, which is slower but permits disk-based training. You can control which engine is used by changing your datumbox.configuration.properties file, or you can modify the configuration objects programmatically. Each engine has its own configuration file, but again everything can be modified programmatically:


Configuration configuration = Configuration.getConfiguration(); //conf from properties file

configuration.setStorageConfiguration(new InMemoryConfiguration()); //use In-Memory engine
//configuration.setStorageConfiguration(new MapDBConfiguration()); //use MapDB engine

Please note that both engines have a directory setting which controls where the models are stored (the inMemoryConfiguration.directory and mapDBConfiguration.directory properties in the config files). Make sure you change them, or else the models will be written to the temporary folder of your system. For more information on how to structure the configuration files, have a look at the Code Example project.
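
If you prefer to set the storage location in code rather than through the properties files, the directory can also be configured on the engine's configuration object. Here is a minimal sketch; the setDirectory() setter name is an assumption based on the directory properties mentioned above, so verify it against the API before relying on it:

Configuration configuration = Configuration.getConfiguration();

MapDBConfiguration mapDBConfiguration = new MapDBConfiguration();
mapDBConfiguration.setDirectory("/var/datumbox/models"); //hypothetical path; setter name assumed
configuration.setStorageConfiguration(mapDBConfiguration);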

With the new Storage mechanism in place, it is now feasible to publicly share pre-trained models covering Sentiment Analysis, Spam Detection, Language Detection, Topic Classification and all the other models that are available via the Datumbox API. You can now download and use all the pre-trained models in your project without calling the API and without being limited by the number of daily calls. Currently the published models are trained using the InMemory storage engine and they support only English. In future releases, I plan to provide support for more languages.
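
As a rough sketch of what this looks like in practice, a downloaded model would be loaded by name just like a locally trained one. The model name below is illustrative rather than the actual name published in the Zoo, and the predict() call follows the Dataframe-based pattern shown earlier:

TextClassifier sentimentClassifier = MLBuilder.load(TextClassifier.class, "SentimentAnalysis", configuration); //"SentimentAnalysis" is an illustrative name
Dataframe newTexts = ...; //texts to classify, loaded as a Dataframe
sentimentClassifier.predict(newTexts);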

In the new framework, there are several changes to the public methods of many of the classes (hence it is not backwards compatible). The most notable difference is in the way the models are initialized. As we saw in the earlier code example, the models are not instantiated directly; instead the MLBuilder class is used to either create or load a model. The training parameters are provided directly to the builder and they can't be changed with a setter.

Another improvement is in the way we perform Model Selection. Version 0.8.0 introduces the new modelselection package which offers all the necessary tools for validating and measuring the performance of our models. In the metrics subpackage we provide the most important validation metrics for classification, clustering, regression and recommendation. Note that the ValidationMetrics are removed from each individual algorithm and are no longer stored together with the model. The framework also offers the new splitters subpackage, which enables splitting the original dataset using different schemes. Currently K-fold splits are performed with the KFoldSplitter class, while partitioning the dataset into a training and test set can be achieved with the ShuffleSplitter. Finally, to quickly validate a model, the framework offers the Validator class. Here is how one can perform K-fold cross validation within a couple of lines of code:


int k = 5; //number of folds
ClassificationMetrics vm = new Validator<>(ClassificationMetrics.class, configuration)
    .validate(new KFoldSplitter(k).split(data), new MaximumEntropy.TrainingParameters());
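
A simple hold-out evaluation would follow the same pattern with the ShuffleSplitter. The sketch below assumes the splitter takes the training proportion and the number of splits in its constructor and that the Split object exposes train/test getters; verify the exact signatures against the API:

//hypothetical hold-out split: 80% train, 1 split; constructor arguments and getters are assumptions
ShuffleSplitter.Split split = new ShuffleSplitter(0.8, 1).split(data).next();
Dataframe trainingSet = split.getTrain();
Dataframe testSet = split.getTest();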

The new Preprocessing package replaces the old Data Transformers and gives better control over how we scale and encode the data before the machine learning algorithms. The following algorithms are supported for scaling numerical variables: MinMaxScaler, StandardScaler, MaxAbsScaler and BinaryScaler. For encoding categorical variables into booleans you can use the following methods: OneHotEncoder and CornerConstraintsEncoder. Here is how you can use the new algorithms:


StandardScaler numericalScaler = MLBuilder.create(
    new StandardScaler.TrainingParameters(), 
    configuration
);
numericalScaler.fit_transform(trainingData);

CornerConstraintsEncoder categoricalEncoder = MLBuilder.create(
    new CornerConstraintsEncoder.TrainingParameters(), 
    configuration
);
categoricalEncoder.fit_transform(trainingData);
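
At prediction time the already-fitted preprocessors would typically be applied to the new data as well. The following is a minimal sketch assuming the fitted transformers expose a plain transform() method; verify the method name against the API:

Dataframe newData = ...; //unseen data to preprocess before prediction
numericalScaler.transform(newData); //method name assumed; applies the scaling learned on trainingData
categoricalEncoder.transform(newData);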

Another important update is that the Feature Selection package was rewritten. Currently all feature selection algorithms focus on specific datatypes, making it possible to chain different methods together. As a result, the TextClassifier and the Modeler classes receive a list of feature-selector parameters rather than just one.
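
To illustrate the idea, a Modeler configured with a chain of feature selectors might look roughly like the sketch below. The setter name is an assumption based on the description above, and the two selector classes are simply examples of algorithms that target different datatypes:

Modeler.TrainingParameters trainingParameters = new Modeler.TrainingParameters();
//setter name assumed; it accepts a list so that multiple selectors can be chained
trainingParameters.setFeatureSelectorTrainingParametersList(Arrays.asList(
    new PCA.TrainingParameters(),            //numerical features
    new ChisquareSelect.TrainingParameters() //categorical/boolean features
));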

As mentioned earlier, all the algorithms now support disk-based training, including those that use Matrices (the only exception is Support Vector Machines). The new storage engine mechanism even makes it possible to keep some algorithms or dataframes in memory while storing others on disk. Several speed improvements were introduced, primarily due to the new storage engine mechanism but also due to the tuning of individual algorithms such as those in the DPMM family.
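
One straightforward way to mix the engines is to use separate Configuration objects for different components; which component goes where is entirely up to you. A minimal sketch of the idea:

Configuration inMemoryConf = Configuration.getConfiguration();
inMemoryConf.setStorageConfiguration(new InMemoryConfiguration());

Configuration mapDBConf = Configuration.getConfiguration();
mapDBConf.setStorageConfiguration(new MapDBConfiguration());

//keep the model's storage in memory while the Dataframe uses the disk-based engine
MaximumEntropy model = MLBuilder.create(new MaximumEntropy.TrainingParameters(), inMemoryConf);
Dataframe data = ...; //build or load this Dataframe with mapDBConf so it is backed by disk
model.fit(data);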

Last but not least, the new version updates all the dependencies to their latest versions and removes some of them, such as commons-lang and lp_solve. The commons-lang library, which was used for HTML parsing, is replaced with a faster custom HTMLParser implementation. The lp_solve library is replaced with a pure Java simplex solver, which means that Datumbox no longer requires specific system libraries to be installed on the operating system. Moreover, lp_solve had to go because it uses LGPLv2, which is not compatible with the Apache 2.0 license.

Version 0.8.0 brings several more new features and improvements to the framework. For a detailed view of the changes please check the Changelog.

 

Don’t forget to clone the code of Datumbox Framework v0.8.0 from Github, check out the Code Examples and download the pre-trained Machine Learning models from Datumbox Zoo. I am looking forward to your comments and suggestions.


