Please look at the following two solutions to the same problem. I don't think spelling out the problem statement would add anything here; it is yet another variation of the Josephus problem:
Solution 1:
import sys
from math import log

cases = int(sys.stdin.readline())
current = 0
while current < cases:
    current += 1
    n = int(sys.stdin.readline())
    # closed-form survivor for the k = 2 Josephus problem:
    # 2*(n - 2**floor(log2(n))) + 1
    print 2*(n - 2**(int(log(n,2))))+1
This solution solves the given 10 test cases in a cumulative 1.0912 seconds and consumes 4360 KiB of memory.
Solution 2:
def josephus_2(n):
    from math import log
    return 2*(n - 2**(int(log(n,2))))+1

import sys

cases = int(sys.stdin.readline())
current = 0
while current < cases:
    current += 1
    n = int(sys.stdin.readline())
    print josephus_2(n)
This solution solves the same 10 test cases in a total of 1.0497 seconds and 640 KiB of memory.
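For reference, both solutions rely on the closed-form answer to the k = 2 Josephus problem, J(n) = 2*(n - 2**floor(log2(n))) + 1. The following is only a quick sanity-check sketch, not part of either submission (josephus_formula and josephus_bruteforce are names I made up), comparing the formula against a direct simulation of the elimination circle:

from math import log

def josephus_formula(n):
    # closed form for step size k = 2
    return 2*(n - 2**(int(log(n,2))))+1

def josephus_bruteforce(n):
    # people 1..n stand in a circle; every second person is eliminated
    people = range(1, n + 1)
    idx = 0
    while len(people) > 1:
        idx = (idx + 1) % len(people)   # skip one survivor...
        people.pop(idx)                 # ...and eliminate the next person
    return people[0]

for n in xrange(1, 100):
    assert josephus_formula(n) == josephus_bruteforce(n), n
print 'formula matches brute force for n = 1..99'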
Being a Python n00b I was wondering: according to the online judge I earn the same points for both, but what makes solution 2 faster than solution 1 and so much more memory efficient? I know the time difference may sound tiny, but it is enough to rank me first among the fastest solutions, even ahead of the C/C++/Perl submissions.
Can this screenshot help?
Internally, Python code is interpreted at run time rather than being compiled to native code, which makes it a bit slower. Running a Python script vs. running C/C++ code: Python source is first compiled into bytecode, and that bytecode is then interpreted and executed by the PVM (Python Virtual Machine).
Python programs are generally expected to run slower than Java programs, but they also take much less time to develop. Python programs are typically 3-5 times shorter than equivalent Java programs. This difference can be attributed to Python's built-in high-level data types and its dynamic typing.
C++ is faster than Python because it is statically typed and compiled ahead of time to native machine code. Python is dynamically typed and executed by an interpreter, so every operation carries extra run-time checks and dispatch, which makes execution slower.
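To make the bytecode step concrete, here is a small sketch of my own (not from the judge) that uses the standard dis module to print the instructions the Python Virtual Machine interprets for the expression used in both solutions; the exact opcode names vary slightly between Python versions:

import dis
from math import log

def josephus_2(n):
    return 2*(n - 2**(int(log(n,2))))+1

# Prints instructions such as LOAD_FAST, LOAD_GLOBAL, BINARY_POWER and
# CALL_FUNCTION -- the bytecode the PVM executes when josephus_2 is called.
dis.dis(josephus_2)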
From past experience I remember finding that moving the computational part into a function (method) can sometimes improve performance. I just re-checked this with the following simple code:
n = 1000000

def compute(n):
    j = 0
    for i in xrange(n):
        j += 1

tic()        # tic()/toc() stand in for starting and stopping a timer
compute(n)
toc()
>>> 0.271 s

tic()
j = 0
for i in xrange(n):
    j += 1
toc()
>>> 0.849 s
The result shows 0.271 s for the first version (using compute) versus 0.849 s for the inline code, a noticeable improvement without changing anything in the computation itself. The reason is that inside a function CPython resolves local variables with fast, array-indexed opcodes (LOAD_FAST/STORE_FAST), whereas at module level every access to j and i is a dictionary lookup in the global namespace (LOAD_NAME/LOAD_GLOBAL and STORE_NAME). So the point is that wrapping the hot loop in a function may improve performance.
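A minimal sketch that makes this visible with the standard dis module (assuming CPython; it follows the same Python 2 style as the rest of the post, and opcode names differ slightly between versions):

import dis

# Compiled as module-level code: j and i live in the global namespace, so
# the compiler emits STORE_NAME / LOAD_NAME, i.e. dictionary lookups.
module_code = compile("j = 0\nfor i in xrange(1000):\n    j += 1\n",
                      "<module-level>", "exec")
dis.dis(module_code)

# The same loop inside a function: j and i are locals, so the compiler
# emits STORE_FAST / LOAD_FAST, which index into a fixed-size array.
def wrapped(n):
    j = 0
    for i in xrange(n):
        j += 1

dis.dis(wrapped)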
Here is the code you can use for the performance comparison:
from __future__ import division
from time import clock

def compute(n):
    j = 0
    for i in xrange(n):
        j += 1

n = 1000000

print 'solution (2) using a method call...'
t0 = clock()
compute(n)
print clock()-t0
#>>> ~0.2415... #faster (solution 2)

print 'solution (1) all inline code...'
t0 = clock()
j = 0
for i in xrange(n):
    j += 1
print clock()-t0
#>>> ~0.6169... #slower (solution 1)
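This same effect is why a common Python idiom is to put the whole script body inside a main() function. As a hypothetical rewrite (not one of the judged submissions), solution 1 could get the fast local-variable access like this:

import sys
from math import log

def main():
    # everything below now uses fast local-variable access
    cases = int(sys.stdin.readline())
    current = 0
    while current < cases:
        current += 1
        n = int(sys.stdin.readline())
        print 2*(n - 2**(int(log(n,2))))+1

if __name__ == '__main__':
    main()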