1970 James Hardy Wilkinson

J. H. Wilkinson

BIRTH:
September 27, 1919, Strood, England

DEATH:
October 5, 1986, Teddington, England

EDUCATION:
Sir Joseph Williamson’s Mathematical School, Rochester, England; Trinity College Cambridge (First Class Honors, 1939)

EXPERIENCE:
British Government Ministry of Supply (1940–1946); National Physical Laboratory (1946–retirement in 1980); he also held several visiting positions at major American universities

HONORS AND AWARDS:
DSc, Cambridge (1963); Fellow of the Royal Society (1969); SIAM John von Neumann Lecturer (1970); ACM Alan M. Turing Award (1970); American Mathematical Society Chauvenet Prize (1987). The J. H. Wilkinson Prize for Numerical Software is named in his honor.

JAMES HARDY ("JIM") WILKINSON
United Kingdom – 1970
CITATION
For his research in numerical analysis to facilitate the use of the high-speed digital computer, having received special recognition for his work in computations in linear algebra and "backward" error analysis.



James (Jim) Hardy Wilkinson was a British mathematician who became the leading expert in a new and important field that emerged after World War II. It goes under two names: matrix computations and (more pompously) numerical linear algebra.

Jim came from a humble family in the dairy business and was the third of five children. He showed a great interest in mathematical problems as a youngster and won a Foundation Scholarship to the Sir Joseph Williamson’s Mathematical School when he was eleven—a fortunate occurrence because the family business had fallen on hard times and they could not have paid for such an elite school.

Wilkinson graduated from Trinity College, Cambridge, the mecca of mathematics in Britain, in 1939 after a stellar undergraduate career. He was expected to become a pure analyst and continue with graduate work at Cambridge, but the outbreak of World War II put an end to that plan. He began working at the British Government Ministry of Supply, doing research on topics such as the thermodynamics of explosions, ballistics, and other subjects of military interest. This wartime work changed his focus radically.

At the end of the war, rather than returning to Cambridge, he joined the National Physical Laboratory (NPL) staff where he worked with Alan Turing on problems associated with Turing’s proposal to build an electronic computer. When Turing left the NPL to go to Manchester University, Wilkinson (with others) took over the development of Turing’s computer—the Pilot ACE (operational in 1950).

This essay will attempt to explain, in as simple a way as possible (although some knowledge of mathematics might be helpful), the concerns of this field of matrix computations and why Wilkinson won the ACM A. M. Turing Award in 1970.

The field would not exist without the availability of fast digital computers. Although a few classical mathematicians, such as Carl Friedrich Gauss (1777–1855) and Karl G. J. Jacobi (1804–1851), had engaged in matrix computations by hand (a labor of Hercules), there was no shared field of study.

Let us consider these electronic computing machines for a moment.

People are awed at the prodigious speeds at which they execute primitive arithmetic operations such as addition and multiplication. Yet this speed is achieved at a price: almost every answer is wrong! Not many people are aware of this property. It is not due to difficulties in approximation. It is due to an iron law: all numbers used in a computer shall have a fixed number of digits. More precisely, the numbers we are talking about are usually floating point numbers, which have two parts, a fraction and an exponent. It is the fractional parts with which we are concerned here. The exponents are also constrained, and this can cause a condition of overflow or underflow (the numbers become too big or too small to be represented in the machine), which is vexing but not our concern in this essay. To get a feel for the constraint on the fractional parts, consider them as restricted to 16 digits (this figure will vary depending on the computer in use). In Wilkinson's first machine the figure was 9 digits. That seems generous enough for practical purposes, even too generous, and for many calculations it is overkill. Nevertheless the product of two 16-digit fractions needs 32 digits to represent it, and the computer always throws away the last (least significant) 16 of them. The error, more precisely the relative error, is minuscule. For calculations involving physical quantities, such as pressure or velocity, the given precision is usually adequate, but in applications such as astronomy it may not be. The name for these minuscule errors is, appropriately, roundoff error: the computer "rounds off" the results of all arithmetic operations. The study of the effect of these errors is called roundoff error analysis.
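
As a concrete illustration (a minimal sketch in Python, whose built-in floats carry roughly 16 significant decimal digits), the exact square of the number below needs more digits than the format can hold, so the trailing part is simply rounded away:

    # Python floats are IEEE double precision: about 16 significant decimal digits.
    a = 1.0 + 2.0**-52                  # the smallest double strictly greater than 1
    # Exactly, a*a = 1 + 2**-51 + 2**-104, but the last term cannot be stored.
    computed = a * a
    print(computed == 1.0 + 2.0**-51)   # True: the 2**-104 part was rounded off
    print(0.1 + 0.2 == 0.3)             # False: every operation is rounded, so
                                        # mathematically equal expressions can differ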

A dramatic demonstration of the catastrophic effect of roundoff error can be obtained with a simple handheld calculator. Enter a positive integer between 2 and 9 (say 5). Press the square root key a large number of times (20 to 40, depending on the sophistication of the calculator) and then press the key for squaring an equal number of times. The original integer would be returned in exact arithmetic, but the calculator will usually (depending on how many decimal digits it retains after each calculation) return either 1 or an approximation to 5 (e.g., 4.9998) instead. The differences are due to accumulating roundoff error. No subtractions are involved and each arithmetic error is tiny in a relative sense, but they accumulate if enough operations are performed.
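
The same experiment is easy to reproduce in software; here is a small sketch in Python, with math.sqrt standing in for the calculator's square-root key and 40 chosen as the (illustrative) number of key presses:

    import math

    x = 5.0
    for _ in range(40):
        x = math.sqrt(x)    # forty presses of the square-root key
    for _ in range(40):
        x = x * x           # forty presses of the squaring key
    print(x)                # exact arithmetic would give 5.0; roundoff gives
                            # something close to, but usually not exactly, 5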

Wilkinson pioneered, but did not invent, successful error analyses of all the matrix algorithms of his day. More on that topic later. Before leaving this description of roundoff error it should be stressed that in most scientific and engineering computations roundoff error is only one, and a minor one at that, of the sources of error in the output. Other types of error are data uncertainty, approximation error, and discretization error when dealing with derivatives. However, matrix computations are distinguished by the fact that for the so-called direct methods roundoff error is the only source of error.

Another striking feature of matrix computations is that the field deals with objects that are not natural physical quantities such as pressure or temperature. Determinants and polynomials are wild. It was only gradually appreciated that the value of a polynomial of degree 1000 or more overflows in the computer unless the evaluation point is very close to a zero of the polynomial. Numbers with a large range of exponents occur very easily.
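
As a tiny sketch of how quickly such values outgrow the exponent range (the largest double-precision number is about 1.8e308, and the evaluation point 3 below is purely illustrative):

    x = 3.0
    try:
        value = x ** 1000        # the exact value is roughly 1.3e477
    except OverflowError:        # CPython signals the overflow with an exception
        value = float("inf")
    print(value)                 # not a usable number: the exponent range is exceeded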

With that background on computers and on roundoff error, let us return to the state of the pioneers in the use of these new devices. Long before the computer age, scientists had approximated derivatives by difference quotients. After all, a derivative is just the limit of a sequence of divided differences, (f(x+h) - f(x))/((x+h) - x), for small h. These details are not important, but the great reward for making these approximations is that differential equations (difficult) turn into difference equations, and these are disguised forms of algebraic equations. When the original differential equation is linear, these difference equations turn into systems of linear algebraic equations (which are usually considered easy). In modern language a set of linear algebraic equations in unknowns called x1, x2, ... is called a matrix equation and written Ax = b, where the vector b holds the right-hand sides, the vector x holds the unknowns, and A is a matrix. For our purposes a matrix is a rectangular array of numbers. Readers who have not been exposed to matrices should just tell themselves that they are an important notational invention allowing a mass of detail to be compressed into a few symbols. The purpose of this paragraph is to hint at how a host of diverse scientific and engineering problems boil down to solving a few standard matrix problems. Matrix computations are almost never of interest for their own sake—they usually occur as intermediate steps in a bigger process.
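
Here is a minimal sketch of that reduction in Python with NumPy: the boundary-value problem -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, discretized by divided differences at n interior grid points, becomes a matrix equation that a direct solver handles at once (the choice of f and the grid size are illustrative):

    import numpy as np

    n = 5                                   # number of interior grid points (illustrative)
    h = 1.0 / (n + 1)                       # grid spacing
    # Second differences approximate -u'': a tridiagonal matrix A.
    A = (np.diag([2.0] * n)
         + np.diag([-1.0] * (n - 1), 1)
         + np.diag([-1.0] * (n - 1), -1)) / h**2
    b = np.ones(n)                          # right-hand side f(x) = 1
    u = np.linalg.solve(A, b)               # direct solution of A u = b
    print(u)                                # approximates u(x) = x(1 - x)/2 at the grid points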

It was immediately apparent to all involved in building automatic digital computers that if these machines could solve systems of equations with, say, hundreds of unknowns, then engineering models could be made much more realistic, with huge benefits to technology. Direct methods for solving Ax = b had been refined for humans with handheld computing devices since the 1930s, and it did not seem too much of a challenge to automate these methods. The only cloud on the horizon was the threat of all those roundoff errors. Might they undermine the hopes inspired by the new devices? Mathematicians of the greatest power (e.g., John von Neumann in the U.S.A. and Alan Turing in England) thought hard about these problems. Remember that no human being had ever solved 50 equations in 50 unknowns by a direct method. Here was unknown territory. A very senior statistician, Harold T. Hotelling, came up with a very pessimistic analysis; it seemed that the errors could accumulate at an exponential rate. Turing was pessimistic. In the U.S.A. von Neumann and Herman H. Goldstine came up with a seminal analysis and a clever, but complicated, solution. In technical terms they proposed solving the normal equations instead of Ax = b. It is not essential to know what the normal equations are, but for those who have been exposed to matrix theory they are A'Ax = A'b, where A' is the transpose of A.
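
In modern notation the two routes look as follows; this is only a sketch on a small random test matrix, not a reconstruction of the original computations:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))               # an illustrative square matrix
    b = rng.standard_normal(4)

    x_direct = np.linalg.solve(A, b)              # solve A x = b directly
    x_normal = np.linalg.solve(A.T @ A, A.T @ b)  # solve the normal equations A'A x = A'b
    print(np.allclose(x_direct, x_normal))        # True here, though forming A'A
                                                  # generally worsens the conditioning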

By this time Wilkinson was the chief assistant to Alan Turing at the National Physical Laboratory, which was charged with building one of Britain's earliest automatic general-purpose digital computers, the Pilot ACE (Automatic Computing Engine), and of course they were giving much thought to the problems it should grapple with initially. In fact Wilkinson designed (and built) the multiplication unit for the Pilot ACE, and he experimented by writing numerical programs for the machine even before it was built, in an attempt to see what problems presented themselves.

Wilkinson had one great advantage over von Neumann and Turing. He had been obliged to solve 18 equations in 18 unknowns, with a hand-cranked mechanical calculator, during WWII. He had seen how amazingly accurate the direct method was. That arduous exercise helped Wilkinson think in a different way from these other experts in accounting for those troublesome roundoff errors: stop obsessing over how large the error in the final output might be, and ask instead how little the data could be changed so that the output is exactly correct. This is the fruitful question.

Another task where Wilkinson made major contributions was the eigenvalue problem. Here the matrix A is square (same number of rows as columns) and the task is to find those few (very special) values e so that A - eI is not invertible. Here I is the identity matrix, whose special property is that Iv = v for all vectors v. Formally this problem is related to the very old problem of finding the zeros of a given polynomial. Wilkinson showed that it is most unwise to reduce the eigenvalue problem to the problem of finding the zeros of its characteristic polynomial, despite the fact that this reduction is the natural approach of a mathematician. Once grasped, the reason is simple: the zeros of a polynomial are, almost always, extremely sensitive to tiny changes in the coefficients of the polynomial. Indeed a big effect of the arrival of automatic computing devices on mathematics, and engineering, was to boost perturbation theory into an essential feature of the field of matrix computations. The invention of algorithms to solve the matrix eigenvalue problem is an interesting topic but a bit too technical for this essay. Wilkinson guided its development in its first three decades (1950–1980). A key idea is to change the given matrix, to make it closer to triangular, without changing the eigenvalues. That requirement severely constrains the changes that can be made to the given matrix.
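
A sketch of that sensitivity, using Wilkinson's own famous example of the polynomial with zeros 1, 2, ..., 20 and his classic tiny perturbation of the coefficient of x^19:

    import numpy as np

    coeffs = np.poly(np.arange(1, 21))       # coefficients of (x-1)(x-2)...(x-20)
    perturbed = coeffs.copy()
    perturbed[1] -= 2.0 ** -23               # Wilkinson's tiny change to the x**19 coefficient

    original_roots = np.roots(coeffs)
    perturbed_roots = np.roots(perturbed)
    print(abs(original_roots.imag).max())    # small: the computed zeros stay close to 1, 2, ..., 20
    print(abs(perturbed_roots.imag).max())   # of order 1: several zeros have moved
                                             # far into the complex plane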

The great rivals to direct methods for solving Ax = b are iterative methods. These methods, developed with great success by Richard Southwell during WWII, do not aim to get the exact answer; instead they aim to improve an approximate solution at each step. The process continues until the approximation z is deemed satisfactory because its residual b - Az is small enough for the user's needs. Often these needs were modest and 3 correct decimal digits would suffice. So the experts were inclined to put their money on these iterative methods. The serious flaw in iterative methods was that they were not applicable to the general case; they required special properties of the matrix A.
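
Here is a hedged sketch of the idea, using one of the simplest iterative schemes (Jacobi iteration) on a small diagonally dominant matrix chosen so that the method converges; the stopping tolerance mirrors the modest "3 correct decimal digits":

    import numpy as np

    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])

    z = np.zeros(3)                            # initial guess
    D = np.diag(A)                             # the diagonal of A
    for _ in range(100):
        z = (b - (A @ z - D * z)) / D          # one Jacobi sweep
        if np.linalg.norm(b - A @ z) < 1e-3:   # stop once the residual b - Az
            break                              # is small enough for the user
    print(z, np.linalg.norm(b - A @ z))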

The 1950s were a time of exploration. Programming was an ordeal until compilers came along late in the decade. However, Wilkinson was able to try various methods for various matrix problems on the Pilot ACE and learn very valuable lessons.

Without too much distortion we could end this essay by saying that Wilkinson won the Turing Award for showing that the fears of the experts were unfounded, for understanding precisely the role of roundoff error in matrix computations, and for showing a way to make it all look rather easy. His tool was the so-called backward error analysis. Instead of concentrating on the size of the error, find the problem which the computed solution solves exactly! For the Ax = b problem let the computer output be z. Look for a small matrix E and a small vector e such that (A + E)z = b + e. If the bounds on E and e are really tiny, should we not be satisfied? After all, the entries in A and b may well be uncertain. If the error in z is still large, then the problem itself, i.e. the pair (A, b), must be close to being ill-posed.
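
A minimal sketch of how this can be measured after the fact: the scaled residual below is a standard normwise expression for the size of the smallest perturbations E and e that make the computed z exact (the random test problem is purely illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))          # an illustrative test problem
    b = rng.standard_normal(5)
    z = np.linalg.solve(A, b)                # the computed solution

    r = b - A @ z                            # residual of the computed solution
    backward_error = np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(z)
                                          + np.linalg.norm(b))
    print(backward_error)                    # around 1e-16: z exactly solves a
                                             # problem within roundoff of (A, b)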

In fact Wilkinson did a great deal more than has been indicated above. He appreciated that there were often subtle but important differences in the way that a method is implemented. For example, the order in which the squares of the entries of a vector are added up (top down or bottom up) can make a difference to the computed norm of the vector.
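
A small sketch of that effect, on an illustrative vector whose entries vary widely in size:

    import numpy as np

    rng = np.random.default_rng(2)
    v = rng.standard_normal(10000) * np.logspace(0, -12, 10000)  # entries of very different sizes
    squares = (v * v).tolist()

    top_down = sum(squares)               # add from the first entry to the last
    bottom_up = sum(reversed(squares))    # add from the last entry to the first
    print(top_down == bottom_up)          # frequently False in floating point
    print(abs(top_down - bottom_up))      # a tiny, but often nonzero, discrepancy
                                          # that carries over to the computed norm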

Here is a very simple example of an issue which is of no importance whatsoever in exact arithmetic but can make a significant difference when a computer is used: for numbers a and b, a² − b² = (a − b)(a + b) in exact arithmetic, but not necessarily in the computer. For example, if a and b are very close in value, then a² and b² agree in most of their leading digits, so computing the squares first and then subtracting cancels those digits, and the small rounding errors made in forming a² and b² can swamp the tiny true difference. Computing (a − b) first from the original data and then multiplying by (a + b) usually preserves far more accuracy. Which form should one use when programming?
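
A hedged sketch of the comparison, using Python's fractions module to obtain the exact value of a² − b² for the stored a and b (the particular numbers are illustrative):

    from fractions import Fraction

    a = 1.0 + 1e-9
    b = 1.0
    exact = Fraction(a) ** 2 - Fraction(b) ** 2   # exact value for the stored a and b

    squares_first = a * a - b * b        # squares first, then a cancelling subtraction
    factored = (a - b) * (a + b)         # the small difference first, then a product

    print(float(abs(Fraction(squares_first) - exact) / exact))  # typically around 1e-8
    print(float(abs(Fraction(factored) - exact) / exact))       # typically around 1e-16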

By the time he retired, Jim had been given the rank of Special Merit Chief Scientific Officer within the British Civil Service, a category which is very rarely used and which allows its holder to continue with research without having to worry about other duties.

In addition to his technical expertise Wilkinson was an excellent expositor with an informal style.

In 1945 Jim married Heather Nora Ware (also a mathematician) and they had a son and a daughter.

Author: Beresford Neill Parlett


