Image Optimization and fragments
I found a really important use case that is missing in fragments: image optimization.
The problem is quite simple: content editors don't care about image sizes. It has become a bit better nowadays, but not long ago a customer complained that the image gallery did not work. I found that he had uploaded a bunch of 10 MB images (he has a pretty neat camera ...) and it was simply slow. Loading 500 MB takes a while ...
Apart from that problem, they simply can't add multiple versions of an image for desktop and mobile. We even had the requirement (a couple of times, actually) that they wanted to be able to specify different images. But they only thought they wanted it and never used the feature later on.
Liferay currently offers no real support here (except for Adaptive Media, which inserts really crappy, mostly useless media queries).
To support that, we currently add our own code to templates: instead of simply inserting the document library link, we do something more. We also told the content editors: just upload the best image you have.
The best solution would be to (we don't implement all of this ourselves either, only some of it):
1) Insert an optimized image with the correct width
2) Add a WebP version as a picture source
3) Add picture sources for mobile
4) For background images: media queries
5) Load some images through JavaScript, since they are not immediately visible
We can easily implement our optimization in Templates, but I don't see a way to do that with Fragments.
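To make points 1-3 and 5 concrete, here is a hedged sketch of the markup such an optimization could emit for a content image (the file names, widths, and breakpoint are made up for illustration; background images from point 4 would instead get CSS media queries):

```html
<picture>
  <!-- 3) Mobile source first, so it wins on small screens -->
  <source media="(max-width: 576px)" srcset="photo-mobile.webp"
          type="image/webp">
  <!-- 2) WebP version as an additional picture source -->
  <source srcset="photo-1200.webp" type="image/webp">
  <!-- 1) Fallback image resized to the layout width;
       5) native lazy loading instead of custom JavaScript -->
  <img src="photo-1200.jpg" width="1200" alt="" loading="lazy">
</picture>
```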
Question:
Could I simply write my own "EditableElementParser"?
@Component(
    immediate = true, property = "type=picture",
    service = EditableElementParser.class
)
...
I am currently not sure whether it would be better to replace "image" or to add my own field (I tend toward the latter). It depends on the rest of the system whether I can simply add new types or whether only replacement works.
Meh. I just realized that I can't write my own parser, or even a replacement for the image one, since the package com.liferay.fragment.entry.processor.editable.parser is not exported.
Hi Christoph,
You're right, there is no way to implement your own EditableElementParser, but you can implement your own FragmentEntryProcessor and process all the images (videos, embedded objects, etc.) in your own way.
Please take a look at this blog post https://community.liferay.com/es/blogs/-/blogs/fragments-extension-fragment-entry-processors and let me know if you need any additional information for your particular case.
Hope it helps!
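To illustrate the idea: inside Liferay the rewriting would live in a FragmentEntryProcessor implementation (the exact method signature varies between versions, so check your platform's interface). The core step itself is plain string processing, so here is a self-contained sketch outside Liferay, where the mode values and the rewriteSrc() helper are hypothetical stand-ins for whatever the real processor would do:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Self-contained sketch of the rewriting a custom FragmentEntryProcessor
// could perform on the fragment HTML it receives.
public class ImageRewriteSketch {

    private static final Pattern SRC_PATTERN =
        Pattern.compile("src=\"([^\"]*)\"");

    public static String processHTML(String html, String mode) {
        // Do nothing in edit mode, so the page editor's own JS keeps working.
        if (!"view".equals(mode)) {
            return html;
        }

        Matcher matcher = SRC_PATTERN.matcher(html);
        StringBuffer sb = new StringBuffer();

        // Rewrite every src attribute in the fragment HTML.
        while (matcher.find()) {
            matcher.appendReplacement(
                sb, "src=\"" + rewriteSrc(matcher.group(1)) + "\"");
        }
        matcher.appendTail(sb);

        return sb.toString();
    }

    // Hypothetical optimization: append a width hint that some resizing
    // endpoint could honor. Replace with your real logic.
    private static String rewriteSrc(String src) {
        return src.isEmpty() ? src : src + "?width=800";
    }
}
```

A real processor would additionally register itself as an OSGi component (see the @Component snippet later in this thread) and use a proper HTML parser rather than a regex.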
Thanks for the input!
1) Why don't you make the interface available? I mean, if adding new types isn't supported (yet?), OK, fine. But maybe it would be necessary for me to just replace one of the existing parsers. I am not sure why I would need that if the processor approach works, but it could be useful.
2) So, I would do something like this?
<myPicture>
    <lfr-editable type="image"><img ...></lfr-editable>
</myPicture>
The mode parameter tells me whether the page is in view or edit mode? So I could do nothing in edit mode and "fix" the URL to the image when I am in view mode?
1) The problem here is that we have a specific JS part responsible for the frontend interaction with the <lfr-editable> tags, and there is no way (yet) to contribute to this JS code. In this situation it doesn't make sense to allow contributions to the backend part.
2) You can create your own tag or just process all the <img> tags on the page (as far as I understand, this is what you want to achieve). Yes, you can use the mode parameter to distinguish between edit and view.
Nope, that doesn't work. When I implement such a processor, I only get the original fragment HTML code. The data/fields entered by the user are not in there:
<div class="p-3">
    <lfr-editable id="Img00" type="image">
        <img alt="" class="mw-100" src="" style="width:100vw">
    </lfr-editable>
</div>
Processor execution is serial and controlled by priority (a component property). The Editable processor has priority = 2; if you define a lower priority (or don't define one at all), your processor will be executed before the Editable one.
And if your processor's priority is higher, your processor will receive HTML that has already been processed by the Editable processor, ready for your manipulations.
Try something like this for your processor:
@Component(
    immediate = true,
    property = "fragment.entry.processor.priority:Integer=100",
    service = FragmentEntryProcessor.class
)
Thanks! That works. I pondered setting the priority to a higher value, but after looking at the code I assumed that the processTemplate code might be the relevant part.
One question though: caching.
I noticed that processFragmentEntryLinkHTML is called on each refresh. (I have developer settings enabled here, so they might disable caching.)
It seems a bit wasteful to me, since the result could easily be cached in a lot of cases (not all of them, but often enough).
How is caching handled for fragments?
Exactly that, with the image gallery: 160 of the 7 MB JPGs in one case. The image gallery plugin itself should be paged (it is paged now, but 100% of the images load); it could instead load 10 at a time and, on the 8th one, lazy-load the next 10.
I know of Adaptive Media, but as usual the documentation around Liferay leaves a lot of features unused, which then promotes the deprecation of those unused features.
I'm curious to know whether the default document thumbnails have become less fuzzy (I'll go check).
There was a huge speed/quality improvement about 2 or 3 years ago due to a change in the ImageTool code. The images have been quite good since then. I'm not sure in which fixpack of 7.0 that change was pushed to the public.
We tried to use Adaptive Media, and I wrote a rant to the developers about a year ago. Well, we tested it and ended up using our own tool.
The problem with AM is that it doesn't take the width of the image into account, only the screen size. So if you want thumbnails, e.g. in an image gallery with <img width="100px" ...>, you get an image chosen by screen size. On 4K screens, you get 4K images ...
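For comparison, width-based selection is what HTML's srcset/sizes mechanism provides when the rendered width is known: the browser picks a candidate based on the slot width (times the device pixel ratio), not the screen size. The file names here are illustrative:

```html
<!-- For a fixed 100 px thumbnail slot, the browser chooses the smallest
     candidate that covers 100 CSS px times the device pixel ratio. -->
<img src="thumb-100.jpg"
     srcset="thumb-100.jpg 100w, thumb-200.jpg 200w, thumb-400.jpg 400w"
     sizes="100px"
     width="100" alt="Gallery thumbnail">
```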
Hey Christoph,
I would appreciate it if you could create a Feature Request explaining the improvement that you would expect from Audience Targeting.
And a second one for how you expect it should be integrated within fragments based on your experience with the custom development you are currently doing.
I wrote one where I specify the requirements for fragments. It is more of an epic and outlines the most important points.
https://issues.liferay.com/browse/LPS-94984
I think that to implement these, several changes to Adaptive Media are required. The most important one is the possibility to request a specific image size (e.g. 345 px) instead of specifying breakpoints in the AM config. Of course, this will lead to some frontend changes, but that is the gist of it.
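To make that request concrete: with such a change, a template or fragment could ask for an exact width directly, along these lines (hypothetical syntax; no such parameter exists in Adaptive Media today):

```
/documents/.../photo.jpg?width=345
```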
Thanks so much!