200-point bounty: urgently need an English-to-Chinese translation (needed before tomorrow morning)


Here, I've translated all of it for you.
Θ-notation

In Chapter 2, we found that the worst-case running time of insertion sort is T(n) = Θ(n^2). Let us define what this notation means. For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0}.[1]

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be "sandwiched" between c1 g(n) and c2 g(n) for sufficiently large n. Because Θ(g(n)) is a set, we could write "f(n) ∈ Θ(g(n))" to indicate that f(n) is a member of Θ(g(n)). Instead, we will usually write "f(n) = Θ(g(n))" to express the same notion. This abuse of equality to denote set membership may at first appear confusing, but we shall see later in this section that it has advantages.
Θ-notation

In Chapter 2 we saw that the worst-case running time of insertion sort is T(n) = Θ(n^2). Let us pin down what this notation means. For a given function g(n), we define Θ(g(n)) to be the following set of functions:

Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0}.

[1] A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that c1 g(n) ≤ f(n) ≤ c2 g(n) once n is sufficiently large. Because Θ(g(n)) is a set, we could write "f(n) ∈ Θ(g(n))" to indicate that f(n) is a member of Θ(g(n)); instead, we usually write "f(n) = Θ(g(n))" to express the same idea. This abuse of the equals sign to denote set membership may seem confusing at first, but later in this section we will see that it has advantages.
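As a quick, informal illustration of the "sandwich" condition (my own sketch, not part of the quoted text), the Python helper below scans a finite range of n and checks that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n). The function name and the sample constants are invented for the example; a finite scan is only a sanity check, not a proof that the bound holds for every n ≥ n0.

# Hypothetical helper: empirically test the Theta "sandwich" on a finite range.
# It can only catch obvious failures; it cannot prove membership for all n >= n0.
def looks_sandwiched(f, g, c1, c2, n0, n_max=10_000):
    return all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max + 1))

# Example: f(n) = 3n^2 + 10n is Theta(n^2); c1 = 3, c2 = 4, n0 = 10 work,
# because 10n <= n^2 as soon as n >= 10.
print(looks_sandwiched(lambda n: 3 * n * n + 10 * n,
                       lambda n: n * n,
                       c1=3, c2=4, n0=10))   # prints True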


Figure 3.1(a) gives an intuitive picture of functions f(n) and g(n), where we have that f(n) =
\u0398(g(n)). For all values of n to the right of n0, the value of f(n) lies at or above c1g(n) and at or
below c2g(n). In other words, for all n \u2265 n0, the function f(n) is equal to g(n) to within a
constant factor. We say that g(n) is an asymptotically tight bound for f(n).

Figure 3.1: Graphic examples of the \u0398, O, and \u03a9 notations. In each part, the value of n0
shown is the minimum possible value; any greater value would also work. (a) \u0398-notation
bounds a function to within constant factors. We write f(n) = \u0398(g(n)) if there exist positive
constants n0, c1, and c2 such that to the right of n0, the value of f(n) always lies between c1g(n)
and c2g(n) inclusive. (b) O-notation gives an upper bound for a function to within a constant
factor.


Figure 3.1(a) shows curves for f(n) and g(n), where f(n) = Θ(g(n)). For n ≥ n0 we have c1 g(n) ≤ f(n) ≤ c2 g(n); in other words, for all n ≥ n0, f(n) equals g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

Figure 3.1: Graphic examples of the Θ, O, and Ω notations. In each part, the value of n0 shown is the smallest possible; any larger value would also work.

(a) Θ-notation bounds a function to within constant factors. If there exist positive constants n0, c1, and c2 such that c1 g(n) ≤ f(n) ≤ c2 g(n) whenever n ≥ n0, we write f(n) = Θ(g(n)).

(b) O-notation gives an upper bound on a function to within a constant factor.

We write f(n) = O(g(n)) if there are positive constants n0 and c such that to the right of
n0, the value of f(n) always lies on or below cg(n). (c) \u03a9-notation gives a lower bound for a
function to within a constant factor. We write f(n) = \u03a9(g(n)) if there are positive constants n0
and c such that to the right of n0, the value of f(n) always lies on or above cg(n).
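To reproduce the idea of Figure 3.1(a) for a concrete pair of functions (my own plotting sketch, assuming numpy and matplotlib are available; the functions and constants are the same invented example as above, not the book's figure):

import numpy as np
import matplotlib.pyplot as plt

n = np.arange(1, 60)
f = 3 * n**2 + 10 * n
g = n**2
c1, c2, n0 = 3, 4, 10   # hand-picked constants for this f and g

plt.plot(n, f, label="f(n) = 3n^2 + 10n")
plt.plot(n, c1 * g, "--", label="c1*g(n) = 3n^2")
plt.plot(n, c2 * g, "--", label="c2*g(n) = 4n^2")
plt.axvline(n0, color="gray", label="n0 = 10")   # f is sandwiched to the right of n0
plt.legend()
plt.show()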
The definition of Θ(g(n)) requires that every member f(n) ∈ Θ(g(n)) be asymptotically
nonnegative, that is, that f(n) be nonnegative whenever n is sufficiently large. (An
asymptotically positive function is one that is positive for all sufficiently large n.)
Consequently, the function g(n) itself must be asymptotically nonnegative, or else the set
\u0398(g(n)) is empty. We shall therefore assume that every function used within \u0398-notation is
asymptotically nonnegative. This assumption holds for the other asymptotic notations defined
in this chapter as well.
In Chapter 2, we introduced an informal notion of \u0398-notation that amounted to throwing away
lower-order terms and ignoring the leading coefficient of the highest-order term. Let us
briefly justify this intuition by using the formal definition to show that (1/2)n^2 - 3n = Θ(n^2). To
do so, we must determine positive constants c1, c2, and n0 such that

c1 n^2 ≤ (1/2)n^2 - 3n ≤ c2 n^2

for all n ≥ n0. Dividing by n^2 yields

c1 ≤ 1/2 - 3/n ≤ c2.


We write f(n) = O(g(n)) if there are positive constants n0 and c such that, to the right of n0, f(n) always lies on or below c g(n).

(c) Ω-notation gives a lower bound on a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that, to the right of n0, f(n) always lies on or above c g(n).

The definition of Θ(g(n)) requires every member f(n) ∈ Θ(g(n)) to be asymptotically nonnegative, that is, f(n) must be nonnegative once n is sufficiently large (an asymptotically positive function is one that is positive for all sufficiently large n). Consequently, the function g(n) itself must be asymptotically nonnegative, or else the set Θ(g(n)) is empty. We therefore assume that every function used within Θ-notation is asymptotically nonnegative; the same assumption holds for the other asymptotic notations defined in this chapter.

In Chapter 2 we introduced an informal notion of Θ-notation that amounted to throwing away the lower-order terms and ignoring the leading coefficient of the highest-order term. Let us briefly justify this intuition by using the formal definition to show that (1/2)n^2 - 3n = Θ(n^2). To do so, we must find positive constants c1, c2, and n0 such that

c1 n^2 ≤ (1/2)n^2 - 3n ≤ c2 n^2

for all n ≥ n0. Dividing through by n^2 gives

c1 ≤ 1/2 - 3/n ≤ c2.
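The excerpt stops here, so the constants still have to be exhibited. One choice that works (my own completion, not part of the quoted text): the right-hand inequality holds for every n ≥ 1 with c2 = 1/2, and since 1/2 - 3/n ≥ 1/2 - 3/7 = 1/14 for all n ≥ 7, the left-hand inequality holds with c1 = 1/14 and n0 = 7. A small exact-arithmetic sanity check in Python:

from fractions import Fraction

# Hand-picked witnesses for (1/2)n^2 - 3n = Theta(n^2), checked exactly on a
# finite range; the full argument is that 1/2 - 3/n increases toward 1/2.
c1, c2, n0 = Fraction(1, 14), Fraction(1, 2), 7
assert all(c1 <= Fraction(1, 2) - Fraction(3, n) <= c2
           for n in range(n0, 10_000))
print("c1 = 1/14, c2 = 1/2, n0 = 7 pass on the sampled range")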


These sentences really are on the long side; for phrasing you can also compare other people's versions — in any case, quite a few people seem to have answered you today.
-------------------------------

overview
The order of growth of the running time of an algorithm, defined in Chapter 2, gives a simple
characterization of the algorithm's efficiency and also allows us to compare the relative
performance of alternative algorithms. Once the input size n becomes large enough, merge
sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running
time is Θ(n^2). Although we can sometimes determine the exact running time of an algorithm,
as we did for insertion sort in Chapter 2, the extra precision is not usually worth the effort of
computing it. For large enough inputs, the multiplicative constants and lower-order terms of
an exact running time are dominated by the effects of the input size itself.

When we look at input sizes large enough to make only the order of growth of the running
time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are
concerned with how the running time of an algorithm increases with the size of the input in
the limit, as the size of the input increases without bound. Usually, an algorithm that is
asymptotically more efficient will be the best choice for all but very small inputs.

Overview

The order of growth of an algorithm's running time, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and also lets us compare its performance with that of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n^2). Although we can sometimes determine an algorithm's exact running time, as we did for insertion sort in Chapter 2, the extra precision is usually not worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effect of the input size itself.
When the input size is large enough that only the order of growth of the running time matters, we are studying the asymptotic efficiency of algorithms. That is, we care about how an algorithm's running time increases with the size of the input in the limit, as the input size grows without bound. Usually, an algorithm that is asymptotically more efficient is the best choice for all but very small inputs.
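To make the "large enough n" point concrete, here is a toy comparison (my own illustration; the per-operation constants 2 and 50 are invented, and only the Θ(n^2) and Θ(n lg n) growth rates come from the text). Under this made-up cost model the n lg n algorithm overtakes the n^2 one somewhere in the hundreds:

import math

def insertion_cost(n):   # ~ Theta(n^2), with an invented constant factor
    return 2 * n * n

def merge_cost(n):       # ~ Theta(n lg n), with an invented constant factor
    return 50 * n * math.log2(n)

n = 2
while insertion_cost(n) <= merge_cost(n):
    n += 1
print("under this model, merge sort is cheaper from n =", n, "onward")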

This chapter gives several standard methods for simplifying the asymptotic analysis of
algorithms. The next section begins by defining several types of "asymptotic notation," of
which we have already seen an example in Θ-notation. Several notational conventions used
throughout this book are then presented, and finally we review the behavior of functions that
commonly arise in the analysis of algorithms.

This chapter describes several standard methods for simplifying the asymptotic analysis of algorithms. The next section begins by defining several kinds of "asymptotic notation," of which we have already seen an example in Θ-notation. Several notational conventions used throughout the book are then presented, and finally we review the behavior of functions that commonly arise in the analysis of algorithms.

3.1 Asymptotic notation

The notations we use to describe the asymptotic running time of an algorithm are defined in
terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}.
Such notations are convenient for describing the worst-case running-time function T (n), which is
usually defined only on integer input sizes.
It is sometimes convenient, however, to abuse asymptotic notation in a variety of ways.

For example, the notation is easily extended to the domain of real numbers or, alternatively, restricted to a subset of the natural numbers. It is important, however, to understand the precise meaning of the notation so that when it is
abused, it is not misused. This section defines the basic asymptotic notations and also
introduces some common abuses.

3.1 Asymptotic notation

The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}. Such notations are convenient for describing the worst-case running-time function T(n), which is usually defined only on integer input sizes. It is sometimes convenient, however, to abuse asymptotic notation in a variety of ways; for example, the notation is easily extended to the domain of real numbers or, alternatively, restricted to a subset of the natural numbers. It is important, however, to understand the precise meaning of the notation, so that when it is abused it is not also misused. This section defines the basic asymptotic notations and also introduces some common abuses.

Overview: The order of growth of the running time of an algorithm, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and also lets us compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n^2). Although we can sometimes determine the exact running time of an algorithm, as we did for insertion sort in Chapter 2, the extra precision is usually not worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the input size itself. When we look at input sizes large enough that only the order of growth of the running time matters, we are studying the asymptotic efficiency of algorithms. That is, we care about how the running time of an algorithm increases with the size of the input in the limit, as the input grows without bound. Usually, an asymptotically more efficient algorithm is the best choice for all but very small inputs.


That's about it. I'm only in the second year of junior high, so don't blame me if the translation isn't great.
Answered by: litianqing123 (Level 4), 6-6 00:39

Second year of junior high? Impressive. I'm a graduate student and now I'm too embarrassed to post a translation of my own.
Answered by: laifu886 (Assistant, Level 3), 6-6 00:52



Without the background knowledge, a translation of this is never going to come out right~~
Answered by: peachmianmian (Assistant, Level 3), 6-6 09:17

Overview: The order of growth of an algorithm's running time, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n^2). Although we can sometimes determine an algorithm's exact running time, as we did for insertion sort in Chapter 2, the effort of computing that extra precision is usually not worthwhile. For large enough inputs, the multiplicative constants and lower-order terms of the exact running time are dominated by the effect of the input size.
When we look at input sizes large enough that only the order of growth of the running time matters, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time increases with the input size in the limit, as the input size increases without bound. Usually, except for very small inputs, the asymptotically more efficient algorithm will be the best choice.

++++ points, please
Answered by: xmaipp1314 (Assistant, Level 2), 6-6 09:40

Oh come on, everyone is being so modest that I'm too embarrassed to show my face! ~~~ Give me the points, hehe, and add me as a friend too~~~

Answered by: ylzqgw (Apprentice Mage, Level 2), 6-6 09:47


Overview
The order of growth of an algorithm's running time, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and allows us to compare the performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n^2). Although we can sometimes determine an algorithm's exact running time, as we did for insertion sort in Chapter 2, the extra precision is usually not worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of the exact running time are dominated by the effect of the input size. When we look at input sizes large enough that only the order of growth of the running time matters, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the algorithm's running time increases with the input size in the limit, as the input size grows without bound. Usually, an asymptotically more efficient algorithm is the best choice for all but very small inputs.

So hard, so hard, so hard!!!
Answered by: Anonymous, 6-6 11:46


This part isn't that hard — keep studying and you'll get there.
Answered by: 糖果果12 (Probationary, Level 1), 6-6 16:03

My English is limited, but this is definitely a hand translation.

The order of growth of an algorithm's running time, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and enables us to compare the performance of related algorithms. Once enough elements are input, merge sort's worst-case running time, Θ(n lg n), beats insertion sort's worst-case running time, Θ(n^2). Although we can sometimes determine an algorithm's exact running time, as Chapter 2 does for insertion sort, that level of precision is usually not worth the computation. For large inputs, the multiplicative constants and lower-order terms of the exact running time are determined by the input size.
When the input is large enough that the growth of the running time is what matters, we study the asymptotic efficiency of algorithms. That is, we care about how an algorithm's running time grows with the input size in the limit, as the input size increases without bound. Usually, an asymptotically more efficient algorithm is the best choice for all but very small inputs.

