Issue
I'm currently working on a project where I have to deal with really big images (>> 100 MB). The images are in the format of a raw byte array (for now only grayscale images; later, color images would have one byte array per channel).
I want to show the image in a JavaFX ImageView, so I have to convert the given "raw" image data to a JavaFX Image in as little time as possible.
I've tried a lot of the solutions I found here on Stack Overflow and from other sources.
SwingFXUtils
The most popular (and easiest) solution is to construct a BufferedImage from the raw data and convert it to a JavaFX Image using SwingFXUtils.toFXImage(...).
On an image of around 100 MB (8184×12000 pixels) I measured the following times:
- The BufferedImage is created in ca. 20-40 ms.
- The conversion to a JavaFX Image via SwingFXUtils.toFXImage(...) takes more than 700 ms, which is too much for my needs.
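For reference, the first step of that route can be done without a per-pixel copy by wrapping the raw byte array in a grayscale raster (a minimal sketch; the class and method names here are illustrative, not from any answer):

```java
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class GrayToBufferedImage {

    // Wraps raw grayscale bytes in a BufferedImage without copying the pixel data.
    public static BufferedImage wrap(byte[] data, int width, int height) {
        ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_GRAY);
        ComponentColorModel cm = new ComponentColorModel(
                cs, new int[] {8}, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
        // Scanline stride = width, pixel stride = 1, single band at offset 0.
        WritableRaster raster = Raster.createInterleavedRaster(
                new DataBufferByte(data, data.length), width, height, width, 1, new int[] {0}, null);
        return new BufferedImage(cm, raster, false, null);
    }

    // The slow step is then the conversion on the JavaFX side:
    // javafx.scene.image.Image fxImage = SwingFXUtils.toFXImage(wrap(data, w, h), null);
}
```

This explains why the BufferedImage construction is cheap (20-40 ms) while toFXImage, which copies and converts every pixel, dominates.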
Encode image and read it as a ByteArrayInputStream
One approach I've found here (https://stackoverflow.com/a/33605064/3237961) is to use OpenCV's functionality to encode the image to a format like *.bmp and construct a JavaFX Image directly from the byte array.
This solution is much more complex (OpenCV or some other encoding library/algorithm is needed), and encoding adds extra computation steps.
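As a sketch of that encode-and-decode route (using javax.imageio here instead of OpenCV, purely for illustration; the class and method names are made up): the grayscale data is wrapped in a BufferedImage, encoded to BMP in memory, and the resulting bytes can then be handed to the Image(InputStream) constructor:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class EncodeRoundTrip {

    // Encodes a raw grayscale byte array to an in-memory BMP.
    public static byte[] encodeToBmp(byte[] data, int width, int height) throws IOException {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
        img.getRaster().setDataElements(0, 0, width, height, data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "bmp", out);
        return out.toByteArray();
    }

    // On the JavaFX side, the bytes are then decoded again:
    // javafx.scene.image.Image fxImage = new javafx.scene.image.Image(new ByteArrayInputStream(bmpBytes));
}
```

The encode step plus the decode inside the Image constructor is exactly the extra work the question complains about.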
Question
So I'm looking for a more efficient way of doing this. Almost all solutions either use SwingFXUtils in the end or solve the problem by iterating over all pixels to convert them (which is the slowest possible solution).
Is there a way to either implement a more efficient function than SwingFXUtils.toFXImage(...) or construct a JavaFX Image directly from the byte array?
Maybe there is also a way to draw a BufferedImage directly in JavaFX, because IMHO the JavaFX Image doesn't bring any advantages and only makes things complicated.
Thanks for your replies.
Solution
Distilling the information from the comments, there appear to be two viable options, both using a WritableImage.
In one, you can use the PixelWriter to set the pixels in the image, using the original byte data and a BYTE_INDEXED PixelFormat. The following demos this approach. Generating the actual byte array data here takes ~2.5 seconds on my system; creating the (big) WritableImage takes about 0.15 seconds, and drawing the data into the image about 0.12 seconds.
import java.nio.ByteBuffer;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.image.ImageView;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.WritableImage;
import javafx.scene.layout.BorderPane;
import javafx.stage.Stage;

public class App extends Application {

    @Override
    public void start(Stage stage) {
        int width = 12000;
        int height = 8184;
        byte[] data = new byte[width * height];

        // Build a grayscale palette: index i maps to opaque RGB (i, i, i).
        int[] colors = new int[256];
        for (int i = 0; i < colors.length; i++) {
            colors[i] = (255 << 24) | (i << 16) | (i << 8) | i;
        }
        PixelFormat<ByteBuffer> format = PixelFormat.createByteIndexedInstance(colors);

        long start = System.nanoTime();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                long dist2 = (1L * x - width / 2) * (x - width / 2)
                        + (y - height / 2) * (y - height / 2);
                double dist = Math.sqrt(dist2);
                double val = (1 + Math.cos(Math.PI * dist / 1000)) / 2;
                data[x + y * width] = (byte) (val * 255);
            }
        }
        long imageDataCreated = System.nanoTime();

        WritableImage img = new WritableImage(width, height);
        long imageCreated = System.nanoTime();

        img.getPixelWriter().setPixels(0, 0, width, height, format, data, 0, width);
        long imageDrawn = System.nanoTime();

        ImageView imageView = new ImageView();
        imageView.setPreserveRatio(true);
        imageView.setImage(img);
        long imageViewCreated = System.nanoTime();

        BorderPane root = new BorderPane(imageView);
        imageView.fitWidthProperty().bind(root.widthProperty());
        imageView.fitHeightProperty().bind(root.heightProperty());
        Scene scene = new Scene(root, 800, 800);
        stage.setScene(scene);
        stage.show();
        long stageShowCalled = System.nanoTime();

        double nanosPerMilli = 1_000_000.0;
        System.out.printf(
            "Data creation time: %.3f%n"
            + "Image Creation Time: %.3f%n"
            + "Image Drawing Time: %.3f%n"
            + "ImageView Creation Time: %.3f%n"
            + "Stage Show Time: %.3f%n",
            (imageDataCreated - start) / nanosPerMilli,
            (imageCreated - imageDataCreated) / nanosPerMilli,
            (imageDrawn - imageCreated) / nanosPerMilli,
            (imageViewCreated - imageDrawn) / nanosPerMilli,
            (stageShowCalled - imageViewCreated) / nanosPerMilli);
    }

    public static void main(String[] args) {
        launch();
    }
}
The (crude) profiling on my system gives
Data creation time: 2414.017
Image Creation Time: 157.013
Image Drawing Time: 122.539
ImageView Creation Time: 15.626
Stage Show Time: 132.433
The other approach is to use a PixelBuffer. It appears PixelBuffer does not support indexed colors, so here there is no option but to convert the byte array data to array data representing ARGB values. Here I use a ByteBuffer where the RGB values are repeated as bytes and the alpha is always set to 0xff:
import java.nio.ByteBuffer;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.image.ImageView;
import javafx.scene.image.PixelBuffer;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.WritableImage;
import javafx.scene.layout.BorderPane;
import javafx.stage.Stage;

public class App extends Application {

    @Override
    public void start(Stage stage) {
        int width = 12000;
        int height = 8184;
        byte[] data = new byte[width * height];
        PixelFormat<ByteBuffer> format = PixelFormat.getByteBgraPreInstance();

        long start = System.nanoTime();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                long dist2 = (1L * x - width / 2) * (x - width / 2)
                        + (y - height / 2) * (y - height / 2);
                double dist = Math.sqrt(dist2);
                double val = (1 + Math.cos(Math.PI * dist / 1000)) / 2;
                data[x + y * width] = (byte) (val * 255);
            }
        }
        long imageDataCreated = System.nanoTime();

        // Expand each grayscale byte to B, G, R plus a constant opaque alpha.
        byte alpha = (byte) 0xff;
        byte[] convertedData = new byte[4 * data.length];
        for (int i = 0; i < data.length; i++) {
            convertedData[4 * i] = convertedData[4 * i + 1] = convertedData[4 * i + 2] = data[i];
            convertedData[4 * i + 3] = alpha;
        }
        long imageDataConverted = System.nanoTime();

        ByteBuffer buffer = ByteBuffer.wrap(convertedData);
        WritableImage img = new WritableImage(new PixelBuffer<ByteBuffer>(width, height, buffer, format));
        long imageCreated = System.nanoTime();

        ImageView imageView = new ImageView();
        imageView.setPreserveRatio(true);
        imageView.setImage(img);
        long imageViewCreated = System.nanoTime();

        BorderPane root = new BorderPane(imageView);
        imageView.fitWidthProperty().bind(root.widthProperty());
        imageView.fitHeightProperty().bind(root.heightProperty());
        Scene scene = new Scene(root, 800, 800);
        stage.setScene(scene);
        stage.show();
        long stageShowCalled = System.nanoTime();

        double nanosPerMilli = 1_000_000.0;
        System.out.printf(
            "Data creation time: %.3f%n"
            + "Data Conversion Time: %.3f%n"
            + "Image Creation Time: %.3f%n"
            + "ImageView Creation Time: %.3f%n"
            + "Stage Show Time: %.3f%n",
            (imageDataCreated - start) / nanosPerMilli,
            (imageDataConverted - imageDataCreated) / nanosPerMilli,
            (imageCreated - imageDataConverted) / nanosPerMilli,
            (imageViewCreated - imageCreated) / nanosPerMilli,
            (stageShowCalled - imageViewCreated) / nanosPerMilli);
    }

    public static void main(String[] args) {
        launch();
    }
}
The timings for this are pretty similar:
Data creation time: 2870.022
Data Conversion Time: 273.861
Image Creation Time: 4.381
ImageView Creation Time: 15.043
Stage Show Time: 130.475
Obviously this approach, as written, consumes more memory. There may be a way to create a custom implementation of ByteBuffer that simply looks into the underlying byte array and generates the correct values without the redundant data storage. Depending on your exact use case, this may be more efficient (if you can reuse the converted data, for example).
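One way to pursue the reuse idea (an untested sketch, not from the original answer; the class name is made up): allocate the 4× BGRA array once and refill it in place for each new frame of grayscale data, so the big allocation happens only on startup:

```java
import java.nio.ByteBuffer;

public class GrayConverter {

    private final byte[] bgra;

    public GrayConverter(int width, int height) {
        this.bgra = new byte[4 * width * height];  // allocated once, reused per frame
    }

    // Expands grayscale bytes into the reusable BGRA (premultiplied) array in place.
    public ByteBuffer convert(byte[] gray) {
        for (int i = 0; i < gray.length; i++) {
            bgra[4 * i] = bgra[4 * i + 1] = bgra[4 * i + 2] = gray[i];
            bgra[4 * i + 3] = (byte) 0xff;  // fully opaque
        }
        return ByteBuffer.wrap(bgra);
    }
}
```

The returned buffer would back the PixelBuffer once; for subsequent frames you call convert(...) again and then, on the FX thread, pixelBuffer.updateBuffer(pb -> null) to mark the whole image as dirty (returning null from the callback means the entire buffer changed).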
Answered By - James_D