Liferay DXP and machine learning: Liferay as an integration platform

Wouldn't it be nice to take pictures with my mobile phone and let Liferay categorize and organize them for me?

Imagine how powerful it would be: I am on a business trip and I take lots of pictures with my phone (pictures of the places I see to show my friends, pictures of food so that I can show my mom that I'm eating well, and my receipts so that I can expense them later). When I'm back home, I go to my document repository and search for the word "RECEIPT", and since all my receipts have been magically tagged as "receipt", they all come up in my search results.

Does it sound like science fiction?

Now, take a look at this video, because it shows exactly the principle we use for this kind of problem: https://youtu.be/ShAUafF2yfw (the video quality is not perfect, but you get the idea, right?).

Liferay is the perfect platform for this problem because:

  1. It comes with a document repository
  2. It can be integrated with virtually anything
  3. It is omnichannel and allows you to use your mobile phone to create documents
  4. It is highly customizable (below I'll show you how I did it)

Just to mention: Liferay Sync is a document-sharing feature for your Liferay system; in this case, it is what allows me to share my mobile pictures with my Liferay server.

How did I do it?

Easy!!

You can take a look at the code here: https://github.com/roclas/liferay7ClarifaiDocumentClassifierModelListener, but to make it even easier for you, I will summarize: what I am doing is using a model listener:

@Component(immediate = true, service = ModelListener.class)
public class ClassifyingDocumentListener extends BaseModelListener<AssetEntry> { ...

Every time an AssetEntry is created (that is, every time an asset is created), the code checks whether it is a document; if it is, it extracts the bytes of the document and sends them to the external API (Clarifai: https://www.clarifai.com/). Clarifai returns information about the picture in a response object, and the model listener uses that information to tag and classify my document.
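To make the flow concrete, here is a minimal sketch of what such a listener can look like. It is not the exact code from the repository above: the Clarifai call and the tagging step are collapsed into placeholder helper methods (classifyWithClarifai and applyTags), and the rest uses standard Liferay kernel APIs.

import java.io.InputStream;
import java.util.Collections;
import java.util.List;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import com.liferay.asset.kernel.model.AssetEntry;
import com.liferay.document.library.kernel.model.DLFileEntry;
import com.liferay.document.library.kernel.service.DLFileEntryLocalService;
import com.liferay.portal.kernel.exception.ModelListenerException;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.model.BaseModelListener;
import com.liferay.portal.kernel.model.ModelListener;
import com.liferay.portal.kernel.util.FileUtil;

@Component(immediate = true, service = ModelListener.class)
public class ClassifyingDocumentListener extends BaseModelListener<AssetEntry> {

    @Override
    public void onAfterCreate(AssetEntry assetEntry) throws ModelListenerException {

        // Only react to Documents and Media file entries; ignore web content, blogs, etc.
        if (!DLFileEntry.class.getName().equals(assetEntry.getClassName())) {
            return;
        }

        try {

            // Load the file entry behind this asset and read its bytes
            DLFileEntry fileEntry = _dlFileEntryLocalService.getDLFileEntry(assetEntry.getClassPK());

            InputStream inputStream = fileEntry.getContentStream();

            byte[] bytes = FileUtil.getBytes(inputStream);

            // Placeholder: send the bytes to the Clarifai API and get tag names back
            List<String> tagNames = classifyWithClarifai(bytes);

            // Placeholder: attach the returned tag names to the asset entry
            applyTags(assetEntry, tagNames);
        }
        catch (Exception exception) {
            _log.error("Could not classify document", exception);
        }
    }

    private List<String> classifyWithClarifai(byte[] bytes) {

        // Call Clarifai's prediction endpoint with the image bytes and map the
        // returned concepts to plain tag names (left out of this sketch)
        return Collections.emptyList();
    }

    private void applyTags(AssetEntry assetEntry, List<String> tagNames) {

        // Update the asset entry with the new tag names through Liferay's asset
        // services (left out of this sketch)
    }

    private static final Log _log = LogFactoryUtil.getLog(ClassifyingDocumentListener.class);

    @Reference
    private DLFileEntryLocalService _dlFileEntryLocalService;

}

Because the listener is just another OSGi component, deploying the module is enough for it to start reacting to newly created documents.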

As simple as that!

Please ask if you have any questions.

As you can see, the hardest part was thinking of an interesting use case, not the implementation itself (and the idea was not even mine; thank you, John Feeney and Filipe Afonso; please keep sharing more cool ideas like this one).

What next?

Do you think it would be interesting for people if I put this in the Marketplace?

Liferay is an integration platform, and it makes integration with virtually any library or API possible. We could almost say that "if it runs on Java, it can be put into an OSGi module and deployed into Liferay".
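As a hypothetical illustration of that claim, the skeleton of such an integration module is usually as small as this; the AcmeIntegration name and its method are made up, but the @Component wrapper pattern is the same for almost any Java library:

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

// Hypothetical example: this component stands in for a wrapper around any
// third-party Java library you want to expose to the rest of the portal.
@Component(immediate = true, service = AcmeIntegration.class)
public class AcmeIntegration {

    @Activate
    protected void activate() {

        // Initialize the wrapped library here (API keys, HTTP clients, etc.)
    }

    public String callExternalApi(String input) {

        // Delegate to the wrapped library; other modules consume this component
        // with @Reference, just like any built-in Liferay service
        return input;
    }

}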

The possibilities for integration use cases are endless.

Do you have any similar ideas? What would you like to see Liferay integrate with?