The MATLAB® language enables you to create programs using both procedural and object-oriented techniques and to use objects and ordinary functions together in your programs.
Two commonly cited drawbacks of object-oriented code are larger program size (object-oriented programs typically involve more lines of code than procedural programs) and slower execution (object-oriented programs are typically slower than procedural programs, as they tend to require more instructions to be executed).
I've been working with OO MATLAB for a while, and ended up looking at similar performance issues.
The short answer is: yes, MATLAB's OOP is kind of slow. There is substantial method call overhead, higher than mainstream OO languages, and there's not much you can do about it. Part of the reason may be that idiomatic MATLAB uses "vectorized" code to reduce the number of method calls, and per-call overhead is not a high priority.
I benchmarked the performance by writing do-nothing "nop" functions as the various types of functions and methods, and timing them in tight loops.
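A simplified sketch of the timing harness is below. It is not the exact benchmark code (that is linked at the end of this answer), and the class name Thing is just an illustrative stand-in for any classdef class with a do-nothing method.

% nop.m -- a separate file on the path containing a do-nothing function:
%   function nop()
%   end
%
% Thing.m -- a hypothetical classdef class with a do-nothing method:
%   classdef Thing
%       methods
%           function nop(obj)
%           end
%       end
%   end

nIters = 100000;
obj = Thing();

tic;
for i = 1:nIters
    nop();                        % plain function call
end
t = toc;
fprintf('nop() function:    %.5f sec   %.2f usec per call\n', t, t/nIters*1e6);

tic;
for i = 1:nIters
    nop(obj);                     % classdef method, function-call syntax
end
t = toc;
fprintf('classdef nop(obj): %.5f sec   %.2f usec per call\n', t, t/nIters*1e6);

Here are some typical results.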
>> call_nops
Computer: PCWIN
Release: 2009b
Calling each function/method 100000 times

nop() function:               0.02261 sec   0.23 usec per call
nop1-5() functions:           0.02182 sec   0.22 usec per call
nop() subfunction:            0.02244 sec   0.22 usec per call
@()[] anonymous function:     0.08461 sec   0.85 usec per call
nop(obj) method:              0.24664 sec   2.47 usec per call
nop1-5(obj) methods:          0.23469 sec   2.35 usec per call
nop() private function:       0.02197 sec   0.22 usec per call
classdef nop(obj):            0.90547 sec   9.05 usec per call
classdef obj.nop():           1.75522 sec  17.55 usec per call
classdef private_nop(obj):    0.84738 sec   8.47 usec per call
classdef nop(obj) (m-file):   0.90560 sec   9.06 usec per call
classdef class.staticnop():   1.16361 sec  11.64 usec per call
Java nop():                   2.43035 sec  24.30 usec per call
Java static_nop():            0.87682 sec   8.77 usec per call
Java nop() from Java:         0.00014 sec   0.00 usec per call
MEX mexnop():                 0.11409 sec   1.14 usec per call
C nop():                      0.00001 sec   0.00 usec per call
Similar results on R2008a through R2009b. This is on Windows XP x64 running 32-bit MATLAB.
The "Java nop()" is a do-nothing Java method called from within an M-code loop, and includes the MATLAB-to-Java dispatch overhead with each call. "Java nop() from Java" is the same thing called in a Java for() loop and doesn't incur that boundary penalty. Take the Java and C timings with a grain of salt; a clever compiler could optimize the calls away completely.
The package scoping mechanism is newish, introduced at about the same time as the classdef classes; the relatively slow call times for packaged functions (the "+pkg.nop()" rows in the later tables) may be related to the same dispatch machinery.
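For anyone not familiar with packages: a packaged function lives in a "+" folder and has to be called with its package prefix (or imported inside a function). A minimal, hypothetical layout:

% Folder layout (hypothetical):
%   +mypkg/
%       nop.m      % containing:  function nop()
%                  %              end

mypkg.nop();       % package-qualified call; this is what the "+pkg.nop()"
                   % rows in the later tables are measuring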
A few tentative conclusions:
The obj.nop() syntax is slower than the nop(obj) syntax, even for the same method on a classdef object. The same is true for Java objects (not shown). If you want to go fast, call nop(obj).

As for why this is so, I can only speculate; the MATLAB engine's OO internals aren't public. It's not an interpreted vs. compiled issue per se (MATLAB has a JIT), but MATLAB's looser typing and syntax may mean more work at run time. For example, you can't tell from syntax alone whether "f(x)" is a function call or an index into an array; that depends on the state of the workspace at run time. It may also be because MATLAB's class definitions are tied to filesystem state in a way that many other languages' are not.
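To make the two call styles concrete (Thing again being an illustrative stand-in classdef class with a nop method):

obj = Thing();
obj.nop();    % method-dispatch ("dot") syntax - the slower form in the table above
nop(obj);     % function-call syntax on the same method - the faster form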
So, what to do?
An idiomatic MATLAB approach to this is to "vectorize" your code by structuring your class definitions such that an object instance wraps an array; that is, each of its fields hold parallel arrays (called "planar" organization in the MATLAB documentation). Rather than having an array of objects, each with fields holding scalar values, define objects which are themselves arrays, and have the methods take arrays as inputs, and make vectorized calls on the fields and inputs. This reduces the number of method calls made, hopefully enough that the dispatch overhead is not a bottleneck.
Mimicking a C++ or Java class in MATLAB probably won't be optimal. Java/C++ classes are typically built such that objects are the smallest building blocks, as specific as you can (that is, lots of different classes), and you compose them in arrays, collection objects, etc, and iterate over them with loops. To make fast MATLAB classes, turn that approach inside out. Have larger classes whose fields are arrays, and call vectorized methods on those arrays.
The point is to arrange your code to play to the strengths of the language - array handling, vectorized math - and avoid the weak spots.
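Here is a hedged sketch of what that planar organization can look like; the class and method names are illustrative, not from the MATLAB documentation or the benchmark code:

classdef PointSet
    % "Planar" value class: each property holds an array, so one object
    % represents many points and methods operate on whole arrays at once.
    properties
        X
        Y
    end
    methods
        function obj = PointSet(x, y)
            obj.X = x(:);
            obj.Y = y(:);
        end
        function d = distTo(obj, other)
            % One vectorized call computes all the distances at once,
            % instead of one method call per point.
            d = sqrt((obj.X - other.X).^2 + (obj.Y - other.Y).^2);
        end
    end
end

With a million points in each PointSet, d = distTo(a, b) is a single method call (one dispatch) rather than a million calls on a million scalar point objects.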
EDIT: Since the original post, R2010b and R2011a have come out. The overall picture is the same, with MCOS calls getting a bit faster, and Java and old-style method calls getting slower.
EDIT: I used to have some notes here on "path sensitivity" with an additional table of function call timings, where function times were affected by how the Matlab path was configured, but that appears to have been an aberration of my particular network setup at the time. The chart above reflects the times typical of the preponderance of my tests over time.
EDIT (2/13/2012): R2011b is out, and the performance picture has changed enough to update this.
Arch: PCWIN
Release: 2011b
Machine: R2011b, Windows XP, 8x Core i7-2600 @ 3.40GHz, 3 GB RAM, NVIDIA NVS 300
Doing each operation 100000 times

style                            total      µsec per call
nop() function:                  0.01578     0.16
nop(), 10x loop unroll:          0.01477     0.15
nop(), 100x loop unroll:         0.01518     0.15
nop() subfunction:               0.01559     0.16
@()[] anonymous function:        0.06400     0.64
nop(obj) method:                 0.28482     2.85
nop() private function:          0.01505     0.15
classdef nop(obj):               0.43323     4.33
classdef obj.nop():              0.81087     8.11
classdef private_nop(obj):       0.32272     3.23
classdef class.staticnop():      0.88959     8.90
classdef constant:               1.51890    15.19
classdef property:               0.12992     1.30
classdef property with getter:   1.39912    13.99
+pkg.nop() function:             0.87345     8.73
+pkg.nop() from inside +pkg:     0.80501     8.05
Java obj.nop():                  1.86378    18.64
Java nop(obj):                   0.22645     2.26
Java feval('nop',obj):           0.52544     5.25
Java Klass.static_nop():         0.35357     3.54
Java obj.nop() from Java:        0.00010     0.00
MEX mexnop():                    0.08709     0.87
C nop():                         0.00001     0.00
j() (builtin):                   0.00251     0.03
I think the upshot of this is that classdef method calls have gotten faster and are now roughly on par with old-style classes, as long as you use the foo(obj) syntax. So method speed is no longer a reason to stick with old-style classes in most cases. (Kudos, MathWorks!)

I've reconstructed the benchmarking code and run it on R2014a.
Matlab R2014a on PCWIN64
Matlab 8.3.0.532 (R2014a) / Java 1.7.0_11 on PCWIN64
Windows 7 6.1 (eilonwy-win7)
Machine: Core i7-3615QM CPU @ 2.30GHz, 4 GB RAM (VMware Virtual Platform)
nIters = 100000

Operation                        Time (µsec)
nop() function:                   0.14
nop() subfunction:                0.14
@()[] anonymous function:         0.69
nop(obj) method:                  3.28
nop() private fcn on @class:      0.14
classdef nop(obj):                5.30
classdef obj.nop():              10.78
classdef pivate_nop(obj):         4.88
classdef class.static_nop():     11.81
classdef constant:                4.18
classdef property:                1.18
classdef property with getter:   19.26
+pkg.nop() function:              4.03
+pkg.nop() from inside +pkg:      4.16
feval('nop'):                     2.31
feval(@nop):                      0.22
eval('nop'):                     59.46
Java obj.nop():                  26.07
Java nop(obj):                    3.72
Java feval('nop',obj):            9.25
Java Klass.staticNop():          10.54
Java obj.nop() from Java:         0.01
MEX mexnop():                     0.91
builtin j():                      0.02
struct s.foo field access:        0.14
isempty(persistent):              0.00
Here are the R2015b results, kindly provided by @Shaked. This is a big change: OOP is significantly faster, and the obj.method() syntax is now as fast as method(obj), and much faster than legacy OOP objects.
Matlab R2015b on PCWIN64
Matlab 8.6.0.267246 (R2015b) / Java 1.7.0_60 on PCWIN64
Windows 8 6.2 (nanit-shaked)
Machine: Core i7-4720HQ CPU @ 2.60GHz, 16 GB RAM (20378)
nIters = 100000

Operation                        Time (µsec)
nop() function:                   0.04
nop() subfunction:                0.08
@()[] anonymous function:         1.83
nop(obj) method:                  3.15
nop() private fcn on @class:      0.04
classdef nop(obj):                0.28
classdef obj.nop():               0.31
classdef pivate_nop(obj):         0.34
classdef class.static_nop():      0.05
classdef constant:                0.25
classdef property:                0.25
classdef property with getter:    0.64
+pkg.nop() function:              0.04
+pkg.nop() from inside +pkg:      0.04
feval('nop'):                     8.26
feval(@nop):                      0.63
eval('nop'):                     21.22
Java obj.nop():                  14.15
Java nop(obj):                    2.50
Java feval('nop',obj):           10.30
Java Klass.staticNop():          24.48
Java obj.nop() from Java:         0.01
MEX mexnop():                     0.33
builtin j():                      0.15
struct s.foo field access:        0.25
isempty(persistent):              0.13
Here's R2018a results. It's not the huge jump that we saw when the new execution engine was introduced in R2015b, but it's still an appreciable year over year improvement. Notably, anonymous function handles got way faster.
Matlab R2018a on MACI64
Matlab 9.4.0.813654 (R2018a) / Java 1.8.0_144 on MACI64
Mac OS X 10.13.5 (eilonwy)
Machine: Core i7-3615QM CPU @ 2.30GHz, 16 GB RAM
nIters = 100000

Operation                        Time (µsec)
nop() function:                   0.03
nop() subfunction:                0.04
@()[] anonymous function:         0.16
classdef nop(obj):                0.16
classdef obj.nop():               0.17
classdef pivate_nop(obj):         0.16
classdef class.static_nop():      0.03
classdef constant:                0.16
classdef property:                0.13
classdef property with getter:    0.39
+pkg.nop() function:              0.02
+pkg.nop() from inside +pkg:      0.02
feval('nop'):                    15.62
feval(@nop):                      0.43
eval('nop'):                     32.08
Java obj.nop():                  28.77
Java nop(obj):                    8.02
Java feval('nop',obj):           21.85
Java Klass.staticNop():          45.49
Java obj.nop() from Java:         0.03
MEX mexnop():                     3.54
builtin j():                      0.10
struct s.foo field access:        0.16
isempty(persistent):              0.07
The next few releases showed no significant changes, so I'm not bothering to include those test results.
In R2021a, it looks like classdef objects have gotten significantly faster again, but structs have gotten slower.
Matlab R2021a on MACI64
Matlab 9.10.0.1669831 (R2021a) Update 2 / Java 1.8.0_202 on MACI64
Mac OS X 10.14.6 (eilonwy)
Machine: Core i7-3615QM CPU @ 2.30GHz, 4 cores, 16 GB RAM
nIters = 100000

Operation                        Time (μsec)
nop() function:                   0.03
nop() subfunction:                0.04
@()[] anonymous function:         0.14
nop(obj) method:                  6.65
nop() private fcn on @class:      0.02
classdef nop(obj):                0.03
classdef obj.nop():               0.04
classdef pivate_nop(obj):         0.03
classdef class.static_nop():      0.03
classdef constant:                0.16
classdef property:                0.12
classdef property with getter:    0.17
+pkg.nop() function:              0.02
+pkg.nop() from inside +pkg:      0.02
feval('nop'):                    14.45
feval(@nop):                      0.59
eval('nop'):                     23.59
Java obj.nop():                  30.01
Java nop(obj):                    6.80
Java feval('nop',obj):           18.17
Java Klass.staticNop():          16.77
Java obj.nop() from Java:         0.02
MEX mexnop():                     2.51
builtin j():                      0.21
struct s.foo field access:        0.29
isempty(persistent):              0.26
I've put the source code for these benchmarks up on GitHub, released under the MIT License. https://github.com/apjanke/matlab-bench
The handle class has additional overhead from tracking all references to itself for cleanup purposes.
Try the same experiment without using the handle class and see what your results are.
OO performance depends significantly on the MATLAB version used. I cannot comment on all versions, but I know from experience that R2012a is much improved over the 2010 versions. I have no benchmarks, so no numbers to present. My code, written exclusively with handle classes under R2012a, will not run at all under earlier versions.
Actually, there is no problem with your code; it is a problem with MATLAB. The cost is essentially the overhead of dispatching into the class code. I did the test with a simple point class, defined once as a handle class and once as a value class.
% Handle-class point; dist returns the squared distance between two points
classdef Pointh < handle
    properties
        X
        Y
    end
    methods
        function p = Pointh(x, y)
            p.X = x;
            p.Y = y;
        end
        function d = dist(p, p1)
            d = (p.X - p1.X)^2 + (p.Y - p1.Y)^2;
        end
    end
end
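The test below labels one pair of points as "value points" but still constructs Pointh handle objects. For a genuine value-class comparison you would need a non-handle counterpart; here is a minimal sketch (the name Pointv is mine, not from the original post):

classdef Pointv
    % Same as Pointh, but a value class (no handle superclass)
    properties
        X
        Y
    end
    methods
        function p = Pointv(x, y)
            p.X = x;
            p.Y = y;
        end
        function d = dist(p, p1)
            d = (p.X - p1.X)^2 + (p.Y - p1.Y)^2;
        end
    end
end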
Here is the test:
%handle points
ph = Pointh(1,2);
ph1 = Pointh(2,3);
% value points (note: this still constructs Pointh handle objects; a
% non-handle class like the Pointv sketch above would give a true value-class timing)
p = Pointh(1,2);
p1 = Pointh(2,3);
% vector points
pa1 = [1 2 ];
pa2 = [2 3 ];
% structure points
Ps.X = 1;
Ps.Y = 2;
ps1.X = 2;
ps1.Y = 3;
N = 1000000;
tic
for i =1:N
ph.dist(ph1);
end
t1 = toc
tic
for i =1:N
p.dist(p1);
end
t2 = toc
tic
for i =1:N
norm(pa1-pa2)^2;
end
t3 = toc
tic
for i =1:N
(Ps.X-ps1.X)^2+(Ps.Y-ps1.Y)^2;
end
t4 = toc
The results:

t1 = 12.0212   % handle class
t2 = 12.0042   % value class
t3 =  0.5489   % vector
t4 =  0.0707   % structure
Therefore, for efficient performance, avoid using OOP; a structure is a good choice for grouping variables instead.