After much searching, I found a bit of code that converts a BufferedImage
to an SWT Image (don't bother reading it yet):
public static ImageData convertToSWT(BufferedImage bufferedImage) {
    if (bufferedImage.getColorModel() instanceof DirectColorModel) {
        DirectColorModel colorModel = (DirectColorModel) bufferedImage.getColorModel();
        PaletteData palette = new PaletteData(
                colorModel.getRedMask(),
                colorModel.getGreenMask(),
                colorModel.getBlueMask());
        ImageData data = new ImageData(
                bufferedImage.getWidth(),
                bufferedImage.getHeight(),
                colorModel.getPixelSize(),
                palette);
        WritableRaster raster = bufferedImage.getRaster();
        int[] pixelArray = new int[3];
        for (int y = 0; y < data.height; y++) {
            for (int x = 0; x < data.width; x++) {
                raster.getPixel(x, y, pixelArray);
                int pixel = palette.getPixel(
                        new RGB(pixelArray[0], pixelArray[1], pixelArray[2]));
                data.setPixel(x, y, pixel);
            }
        }
        return data;
    } else if (bufferedImage.getColorModel() instanceof IndexColorModel) {
        IndexColorModel colorModel = (IndexColorModel) bufferedImage.getColorModel();
        int size = colorModel.getMapSize();
        byte[] reds = new byte[size];
        byte[] greens = new byte[size];
        byte[] blues = new byte[size];
        colorModel.getReds(reds);
        colorModel.getGreens(greens);
        colorModel.getBlues(blues);
        RGB[] rgbs = new RGB[size];
        for (int i = 0; i < rgbs.length; i++) {
            rgbs[i] = new RGB(reds[i] & 0xFF, greens[i] & 0xFF, blues[i] & 0xFF);
        }
        PaletteData palette = new PaletteData(rgbs);
        ImageData data = new ImageData(
                bufferedImage.getWidth(),
                bufferedImage.getHeight(),
                colorModel.getPixelSize(),
                palette);
        data.transparentPixel = colorModel.getTransparentPixel();
        WritableRaster raster = bufferedImage.getRaster();
        int[] pixelArray = new int[1];
        for (int y = 0; y < data.height; y++) {
            for (int x = 0; x < data.width; x++) {
                raster.getPixel(x, y, pixelArray);
                data.setPixel(x, y, pixelArray[0]);
            }
        }
        return data;
    }
    return null;
}
(found here: http://www.java2s.com/Code/Java/SWT-JFace-Eclipse/ConvertsabufferedimagetoSWTImageData.htm).
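For context, this is roughly how I turn the result into something SWT can display (a minimal sketch on my part; display stands for whatever SWT Display the widget runs on):
// Minimal usage sketch (display is assumed to be an existing SWT Display):
ImageData data = convertToSWT(bufferedImage);
Image swtImage = new Image(display, data);   // org.eclipse.swt.graphics.Image
// ... hand swtImage to a Label, a Canvas paint listener, etc. ...
swtImage.dispose();                          // SWT images must be disposed explicitly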
I've tested it, and it works just fine. The problem is that I don't understand it (my best guess is that it goes through the raw pixel-data interfaces of both toolkits to make the transfer). It occurred to me that a much simpler solution would be to write the BufferedImage
out to a ByteArrayOutputStream
and then read it back into an SWT Image with a ByteArrayInputStream
. Are there any problems with this solution? What about speed?
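Roughly what I have in mind (just a sketch on my part, assuming PNG as the intermediate format; ImageIO is the standard AWT encoder and ImageData has an InputStream constructor on the SWT side):
// Sketch of the stream-based idea: encode with AWT, decode with SWT.
// (ImageIO.write throws IOException, so this would live in a method that handles it.)
ByteArrayOutputStream out = new ByteArrayOutputStream();
ImageIO.write(bufferedImage, "png", out);                                     // AWT image -> PNG bytes
ImageData data = new ImageData(new ByteArrayInputStream(out.toByteArray()));  // PNG bytes -> SWT ImageData
Image swtImage = new Image(display, data);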
This conversion is necessary because all of the image resizing libraries out there are for AWT, and yet I'm displaying the image with SWT.
Thanks!
This is a more complete version... The one posted in the question does not work for me.
/**
* snippet 156: convert between SWT Image and AWT BufferedImage.
* <p>
* For a list of all SWT example snippets see
* http://www.eclipse.org/swt/snippets/
*/
public static ImageData convertToSWT(BufferedImage bufferedImage) {
    if (bufferedImage.getColorModel() instanceof DirectColorModel) {
        // The raster-based loop from the question is replaced here with getRGB(),
        // which also carries the alpha channel through.
        DirectColorModel colorModel = (DirectColorModel) bufferedImage.getColorModel();
        PaletteData palette = new PaletteData(
                colorModel.getRedMask(),
                colorModel.getGreenMask(),
                colorModel.getBlueMask());
        ImageData data = new ImageData(bufferedImage.getWidth(), bufferedImage.getHeight(),
                colorModel.getPixelSize(), palette);
        for (int y = 0; y < data.height; y++) {
            for (int x = 0; x < data.width; x++) {
                int rgb = bufferedImage.getRGB(x, y);
                int pixel = palette.getPixel(
                        new RGB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF));
                data.setPixel(x, y, pixel);
                if (colorModel.hasAlpha()) {
                    data.setAlpha(x, y, (rgb >> 24) & 0xFF);
                }
            }
        }
        return data;
    }
    else if (bufferedImage.getColorModel() instanceof IndexColorModel) {
        IndexColorModel colorModel = (IndexColorModel) bufferedImage.getColorModel();
        int size = colorModel.getMapSize();
        byte[] reds = new byte[size];
        byte[] greens = new byte[size];
        byte[] blues = new byte[size];
        colorModel.getReds(reds);
        colorModel.getGreens(greens);
        colorModel.getBlues(blues);
        RGB[] rgbs = new RGB[size];
        for (int i = 0; i < rgbs.length; i++) {
            rgbs[i] = new RGB(reds[i] & 0xFF, greens[i] & 0xFF, blues[i] & 0xFF);
        }
        PaletteData palette = new PaletteData(rgbs);
        ImageData data = new ImageData(bufferedImage.getWidth(), bufferedImage.getHeight(),
                colorModel.getPixelSize(), palette);
        data.transparentPixel = colorModel.getTransparentPixel();
        WritableRaster raster = bufferedImage.getRaster();
        int[] pixelArray = new int[1];
        for (int y = 0; y < data.height; y++) {
            for (int x = 0; x < data.width; x++) {
                raster.getPixel(x, y, pixelArray);
                data.setPixel(x, y, pixelArray[0]);
            }
        }
        return data;
    }
    else if (bufferedImage.getColorModel() instanceof ComponentColorModel) {
        ComponentColorModel colorModel = (ComponentColorModel) bufferedImage.getColorModel();
        // ASSUMES: 3 BYTE BGR IMAGE TYPE
        PaletteData palette = new PaletteData(0x0000FF, 0x00FF00, 0xFF0000);
        ImageData data = new ImageData(bufferedImage.getWidth(), bufferedImage.getHeight(),
                colorModel.getPixelSize(), palette);
        // This is valid because we are using a 3-byte data model with no transparent pixels
        data.transparentPixel = -1;
        WritableRaster raster = bufferedImage.getRaster();
        int[] pixelArray = new int[3];
        for (int y = 0; y < data.height; y++) {
            for (int x = 0; x < data.width; x++) {
                raster.getPixel(x, y, pixelArray);
                int pixel = palette.getPixel(
                        new RGB(pixelArray[0], pixelArray[1], pixelArray[2]));
                data.setPixel(x, y, pixel);
            }
        }
        return data;
    }
    return null;
}
The complexity of the code is mainly due to the different possible color models of a BufferedImage
. I don't think you can improve much on this. First of all, the use of an intermediate Stream
requires that the two image systems share a common encoded format, and the conversion to and from a Stream
is definitely going to be much slower than the current code.
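For the resizing part mentioned in the question, the usual pattern is to do the scaling on the AWT side and only convert the final result once. A rough sketch (resize is just an illustrative helper name, not part of the snippet above; the rendering hint is optional):
// Illustrative helper: scale with plain AWT, then convert once for SWT.
public static ImageData resize(BufferedImage source, int width, int height) {
    BufferedImage scaled = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = scaled.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    g.drawImage(source, 0, 0, width, height, null);
    g.dispose();
    // TYPE_INT_ARGB uses a DirectColorModel, so this hits the first branch above
    return convertToSWT(scaled);
}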