For one of my classes I recently came across a Ruby and a Python implementation that use the Miller-Rabin algorithm to identify the primes between 20 and 29000. I am curious why, even though they appear to be identical implementations, the Python code runs so much faster. I have read that Python is generally…
miller_rabin.rb:
def miller_rabin(m, k)
  t = (m - 1) / 2
  s = 1
  while t % 2 == 0
    t /= 2
    s += 1
  end
  for r in (0...k)
    b = 0
    b = rand(m) while b == 0
    prime = false
    y = (b**t) % m
    if y == 1
      prime = true
    end
    for i in (0...s)
      if y == (m - 1)
        prime = true
        break
      else
        y = (y * y) % m
      end
    end
    if not prime
      return false
    end
  end
  return true
end
count = 0
for j in (20..29000)
  if j % 2 == 1 and miller_rabin(j, 2)
    count += 1
  end
end
puts count
miller_rabin.py:
import math
import random

def miller_rabin(m, k):
    s = 1
    t = (m - 1) / 2
    while t % 2 == 0:
        t /= 2
        s += 1
    for r in range(0, k):
        rand_num = random.randint(1, m - 1)
        y = pow(rand_num, t, m)
        prime = False
        if y == 1:
            prime = True
        for i in range(0, s):
            if y == m - 1:
                prime = True
                break
            else:
                y = (y * y) % m
        if not prime:
            return False
    return True

count = 0
for j in range(20, 29001):
    if j % 2 == 1 and miller_rabin(j, 2):
        count += 1
print count
When I measure the execution time of each with Measure-Command in Windows PowerShell, I get the following results:
Python 2.7:
Ticks: 4874403
TotalMilliseconds: 487.4403

Ruby 1.9.3:
Ticks: 682232430
TotalMilliseconds: 68223.243
I would appreciate it if anyone could explain why there is such a huge difference between them.
I think these profiling results should answer your question:

 %self     total     self     wait    child    calls   name
 96.81     43.05    43.05     0.00     0.00    17651   Fixnum#**
  1.98      0.88     0.88     0.00     0.00    17584   Bignum#%
  0.22     44.43     0.10     0.00    44.33    14490   Object#miller_rabin
  0.11      0.05     0.05     0.00     0.00    32142   <Class::Range>#allocate
  0.11      0.06     0.05     0.00     0.02    17658   Kernel#rand
  0.08     44.47     0.04     0.00    44.43    32142   *Range#each
  0.04      0.02     0.02     0.00     0.00    17658   Kernel#respond_to_missing?
  0.00     44.47     0.00     0.00    44.47        1   Kernel#load
  0.00     44.47     0.00     0.00    44.47        2   Global#[No method]
  0.00      0.00     0.00     0.00     0.00        2   IO#write
  0.00      0.00     0.00     0.00     0.00        1   Kernel#puts
  0.00      0.00     0.00     0.00     0.00        1   IO#puts
  0.00      0.00     0.00     0.00     0.00        2   IO#set_encoding
  0.00      0.00     0.00     0.00     0.00        1   Fixnum#to_s
  0.00      0.00     0.00     0.00     0.00        1   Module#method_added
Ruby's ** operator looks slow compared to Python's.
It looks like (b**t) is usually far too large to fit in a Fixnum, so Ruby falls back to Bignum (arbitrary-precision) arithmetic, which is much slower. The Python version never pays that cost: pow(rand_num, t, m) is Python's three-argument modular exponentiation, which reduces mod m at every step, so its intermediate values stay small.
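The Ruby code can be fixed the same way, by doing the reduction mod m inside the exponentiation instead of computing the full power first. A minimal sketch of square-and-multiply modular exponentiation (the mod_pow name is my own; newer Rubies, 2.4+, also have a built-in Integer#pow(exp, mod) that does this):

```ruby
# Square-and-multiply modular exponentiation: computes (base**exp) % mod
# without ever materializing the huge intermediate value base**exp.
def mod_pow(base, exp, mod)
  result = 1
  base %= mod
  while exp > 0
    result = (result * base) % mod if exp.odd?  # fold in this bit of exp
    base = (base * base) % mod                  # square for the next bit
    exp >>= 1
  end
  result
end

# In miller_rabin, replace
#   y = (b**t) % m
# with
#   y = mod_pow(b, t, m)
```

All intermediate values stay below mod**2, so the arithmetic stays in fast fixed-width integers for moduli of this size, which should eliminate the Fixnum#** time dominating the profile above.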
