Computers:



Monad/Comonad duality, also lazy/eager duality

Post  Shelby on Wed Jul 27, 2011 4:17 am

http://blog.sigfpe.com/2006/06/monads-kleisli-arrows-comonads-and.html?showComment=1311779802514#c3359419049328160234

Shelby Moore III wrote:
Monad is the model for any parametric type that we know the generative structure of, so we can compose functions on lifted outputs, because the type knows how to lift (i.e. construct, generate, 'unit' or 'return') instances of its type parameter to its structure.

Comonad is the model for any parametric type whose structure we don't know how to generate, but we can observe instances of the type parameter in its structure as they occur. We will only know its final structure when it is destructed and observation ceases. We can't lift instances of its type parameter to its structure, so we can't compose functions on outputs. Instead, we can compose functions with lifted inputs (and optionally outputs, i.e. map on observations), because the type has observations.

Conceptually, monad vs. comonad duality is related to the duality of induction vs. coinduction, and initial vs. final (least vs. greatest) fixpoint, because we can generate structure for a type that has an initiality, but we can only observe structure until we reach a finality.

Induction and Co-induction
Initiality and Finality
Wikipedia Coinduction
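A minimal sketch in Python of the asymmetry described above, using a list as the monad and a nonempty history of observations as the comonad (the names unit, bind, extract, and extend are the conventional ones; the history representation is my illustrative assumption, not anything from the linked blog):

```python
# Monad: the type knows how to lift ("unit"/"return") a value into its
# structure, so we can compose functions whose OUTPUTS are lifted.
def unit(x):
    return [x]                          # lift a value into the list structure

def bind(m, f):                         # (list[a], a -> list[b]) -> list[b]
    return [y for x in m for y in f(x)]

# Comonad: we cannot construct the structure from a bare value; we can only
# observe it.  Here the "structure" is a nonempty history of observations.
def extract(w):                         # observe the newest value
    return w[-1]

def extend(w, f):                       # (history[a], history[a] -> b) -> history[b]
    return [f(w[:i + 1]) for i in range(len(w))]
```

So bind composes on lifted outputs, while extend composes a function over every prefix of the observed history, i.e. with lifted inputs.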

I had visited this blog page before (and not completely grasped it), then I read this page again, trying to conceptualize the sums vs. products duality for eager vs. lazy evaluation.

Perhaps I am in error, but it appears that with lazy evaluation and corecursion, a monad can be used instead of a comonad, e.g. isn't it true that a stream can be abstracted by a monadic list in Haskell?

So dually, am I correct to interpret that laziness isn't necessary for modeling compositionality of coinductive types, when there is a comonad in the pure (referentially transparent) part where the composition is?

Edit#1: The word "compositionality" can refer to the principle that the meaning of the terms of the denotational semantics, e.g. the Comonad model, should depend only on the meaning of the fragments of the syntax it employs, i.e. the subterms. What I understand from a research paper[1] is that the compositional degrees-of-freedom of the higher-level language created by the higher-level denotational semantics depend on the "free variables" in the compositionality fragments. Thus the compositionality can be affected by the evaluation order and other operational semantics. Due to the Halting Problem, where the lower-level semantics is Turing complete, the subterms will never be 100% context-free. I have proposed that when the higher-level semantics unifies lower-level concepts, compositionality is advanced. Please see the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics at http://copute.com for more explanation.

[1] Declarative Continuations and Categorical Duality, Filinski, section 1.4.1, The problem of direct semantics

Edit#2: The composition of functions which do not input a comonad, with those impure ones that do, can be performed with the State monad. The comonad method, Comonad[T] -> T, is impure (see explanation in the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics -> Category Theory at http://copute.com), so we must thread it across functions which might be pure, using the State monad. Thus the answer to my last question is "correct": we can purely compose any pure functions which input a Comonad, because the comonad method, (Comonad[T] -> A) -> Comonad[T] -> Comonad[A], is pure if the input function, Comonad[T] -> A, is. Also the answer to my other question is "incorrect": a monad can abstract a comonad, but only for the history of observations (see explanation at same section), because a monad has no interface for creating a new observation.

Shelby Moore III wrote:
Followup to the two questions in my prior comment.

Monad can't abstract a comonad, because it has no method, m a -> a, for creating a new observation. A monad can abstract the history of prior observations. Afaics, for a language with multiple inheritance, a subtype of comonad could also be a subtype of monad, thus providing a monadic interface to the history of observations. This is possible because the comonad observation factory method, m a -> a, is impure (the state of the comonad blackbox changes when history is created from it).

Composition of functions, m a -> b, which input a comonad is pure (i.e. no side-effects, referentially transparent, declarative not imperative) where those functions are pure (e.g. they do not invoke m a -> a to create a new observation). In short, the method (m a -> b) -> m a -> m b is pure if m a -> b is.
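A sketch of that purity claim in Python (the Observations class and its names are my illustrative assumptions, not a real API): creating an observation, m a -> a, mutates the blackbox, while extending a pure function over the recorded history, (m a -> b) -> m a -> m b, depends only on its arguments:

```python
class Observations:
    """A blackbox source of observations; observing it (m a -> a) is impure."""
    def __init__(self, source):
        self._source = iter(source)
        self.history = []               # the part that pure code may see

    def observe(self):                  # impure: mutates the blackbox state
        x = next(self._source)
        self.history.append(x)
        return x

def extend(history, f):
    """Pure (m a -> b) -> m a -> m b, provided f is pure."""
    return [f(history[:i + 1]) for i in range(len(history))]

w = Observations([5, 7])
first, second = w.observe(), w.observe()    # each call changes w.history
prefix_lengths = extend(w.history, len)     # pure composition over history
```

Calling extend twice with the same history gives the same result (referential transparency), while each observe() call changes the history it will be given.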


Last edited by Shelby on Sat Jul 30, 2011 2:01 am; edited 7 times in total

Shelby
Admin

Posts: 3107
Join date: 2008-10-21

View user profile http://GoldWeTrust.com

Back to top Go down

Call-by-need memoizes arguments, not functions

Post  Shelby on Sat Jul 30, 2011 10:07 pm

http://augustss.blogspot.com/2011/04/ugly-memoization-heres-problem-that-i.html#3700840423100518476

Shelby Moore III wrote:
@francisco: Haskell's call-by-need (lazy evaluation) memoizes function arguments, but not functions.

=====verbose explanation======

The arguments to a function are thunked, meaning each argument gets evaluated only once, and only when it is needed inside the function. This is not the same as checking whether a function was previously called with the same arguments.

If the argument is a function, the thunk will call it once without checking whether that function had been called elsewhere with the same arguments.

Thunks are conceptually similar to parameterless anonymous functions with a closure on the argument, a boolean, and a variable to store the result of the argument evaluation. Thus thunks incur no lookup costs, because they are parameterless. The cost of the thunk is the check on the boolean.

Thunks give the same amount of memoization as call-by-value (which doesn't use thunks). Neither call-by-need nor call-by-value memoize function calls. Rather both do not evaluate the same argument more than once. Call-by-need delays that evaluation with a thunk until the argument is first used within the function.
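A sketch of the distinction in Python (the thunk helper is my illustrative model of call-by-need, not Haskell's actual runtime): the argument is evaluated at most once per call, but equal arguments across separate calls are not shared:

```python
calls = 0

def expensive():
    global calls
    calls += 1                      # count how many times we really evaluate
    return 42

def thunk(compute):
    """Parameterless closure over a boolean and a result slot."""
    state = {"done": False, "value": None}
    def force():
        if not state["done"]:       # the cost of the thunk: one boolean check
            state["value"] = compute()
            state["done"] = True
        return state["value"]
    return force

def f(arg):                         # arg arrives as a thunk, as in call-by-need
    return arg() + arg()            # used twice, evaluated once

assert f(thunk(expensive)) == 84 and calls == 1
f(thunk(expensive))                 # a second call re-evaluates the argument
assert calls == 2                   # arguments are memoized, functions are not
```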

Apologies for being so verbose, but Google didn't find an explanation that made this clear, so I figured I would cover all the angles in this comment.


Eager vs. Lazy evaluation

Post  Shelby on Sun Jul 31, 2011 11:22 am

See also:

http://goldwetrust.up-with.com/t112p165-computers#4430

========================================

http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#4642367335333855323

Shelby Moore III wrote:
Appreciated this article. Some points:

1. Lazy also has latency indeterminism (relative to the imperative world, e.g. IO monad).

2. A compiler strategy that dynamically subdivides map for parallel execution on multiple cores requires that map not be lazy.

3. any = or . map is not "wrong" for eager (strict) evaluation when there is referential transparency. It is slower in sequential time, but maybe faster in parallel execution.

4. Wikipedia says that for Big O notation, O(n) is faster than O(n log n) = O(log n!). @Lennart, I think you meant to say that O(n) is faster than O(n log n).

5. Given that laziness causes space and latency indeterminism, if the main reason to use lazy is to avoid the performance hit for conjunctive functional composition over functors, then only functions which output applicable functors need apply laziness. As @martin (Odersky) suggested, provide lazy and eager versions of these functions. Thus eager by default with optional lazy annotations would be preferred.

The 80/20 rule: 80+% of the programmers in the world are not likely to grasp debugging lazy space leaks. It will only take one really difficult one to waste a work-week, and that will be the end of it. And what is the big payoff, especially with the parallelism freight train bearing down? Someone claimed that perhaps the biggest advance in mainstream languages since C was GC (perhaps Java's main improvement over C++). Thus, if the typical programmer couldn't do ref counting without creating cycles, I don't think they will ever grasp lazy space and latency indeterminism. I am approaching this from wanting a mainstream statically typed language which can replace Java.

Am I correct to say?

RT eager code evaluated as RT lazy could exhibit previously unseen space and latency issues.

RT lazy code evaluated as RT eager could exhibit previously unseen non-termination, e.g. infinite recursion and exceptions.
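The first claim is about space and latency, which is hard to show in a few lines, but the second can be made concrete with Python's lazy iterators (used here only as an analogy for lazy evaluation):

```python
from itertools import count, islice

# Lazy: an infinite stream is fine so long as only a finite prefix is demanded.
naturals = count(0)                         # conceptually infinite
first_five = list(islice(naturals, 5))      # demand only five elements

assert first_five == [0, 1, 2, 3, 4]

# Evaluating the same definition eagerly, e.g. list(count(0)), would never
# terminate: previously unseen non-termination, as claimed above.
```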

@augustss Rash judgments w/o experience are annoying. Two decades of programming and copious reading are all I can humbly offer at this moment. This is imperfect and warrants my caution. I appreciate factual correction.

My point is that with eager, debugging the changes in the program's state machine at any function step will be bounded to the function hierarchy inside the body of the function, so the programmer can correlate changes in the state machine to what the function is expected to do.

Whereas, with lazy, any function may backtrack into functions that were in the argument hierarchy of the current function, and inside functions called an indeterminate time prior. Afaics, lazy debugging should be roughly analogous to debugging random event callbacks, and reverse engineering the state machine in a blackbox event generation module.

As I understand from Filinski's thesis, eager and lazy are categorical duals in terms of the inductive and coinductive values in the program. Eager doesn't have products (e.g. conjunctive logic, "and") and lazy doesn't have coproducts (e.g. disjunctive, "or"). So this means that lazy imposes imperative control logic incorrectness from the outside-in, because coinductive types are built from observations and their structure (coalgebra) is not known until the finality when the program ends. Whereas eager's incorrectness is from the inside-out, because inductive types have an a priori known structure (algebra) built from an initiality. Afaics, this explains why debugging eager has a known constructive structure, while debugging lazy is analogous to guessing the structure of a blackbox event callback generation module.

My main goal is to factually categorize the tradeoffs. I am open to finding the advantages of lazy. I wrote a lot more about this at my site. I might be wrong, and that is why I am here to read, share, and learn. Thanks very much.

Could you tell me why lazy (with optional eager) is better for you than referentially transparent (RT) eager (with optional lazy), other than the speed of conjunctive (products) functional composition? Your lazy binding and lazy functions points should be doable with terse lazy syntax at the let or function site only, in a well designed eager language. Infinite lazy types can be done with the optional lazy in an eager language with such optional lazy syntax. Those seem to be superficial issues with solutions, unlike the debugging indeterminism, which is fundamental and unavoidable.

I wish this post could be shorter and still convey all my points.

Idea: perhaps deforestation could someday automate the decision on which eager code paths should be evaluated lazily.

This would perhaps make our debate moot, and also perhaps provide the degrees-of-freedom to optimize the parallel vs. sequential execution time trade-off at runtime. I suppose this has been looked at before. I don't know whether it is impossible, because I haven't studied the research on automated deforestation deeply enough.

I understand the "cheap deforestation" algorithm in Haskell only works on lazy code paths, and only those with a certain "foldr/build" structure.

Perhaps an algorithm could flatten the function calls in eager, referentially transparent (i.e. no side-effects) code paths to their bodies, but in lazy order, until a cyclical structure is identified, then "tie the knot" on that cyclical structure. Perhaps there is some theorem that such a structure is bounded (i.e. "safe") in the transformation of coproducts to products correctness, i.e. space and latency determinism.

Your any = or . map example flattens to a cycle (loop) on each element of the functor (e.g. list), which converts each element to a boolean, exits if the result is true, and always discards the converted element in every possible code path of the inputs. That discard proves (assuming RT and thus no side-effects in map) the lazy evaluation of the eager code has no new coproducts, and thus the determinism in space and latency is not transformed.
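The any = or . map flattening can be sketched in Python, whose map is lazy and whose any short-circuits (the recording list is my instrumentation, not part of the example being discussed):

```python
evaluated = []

def pred(x):
    evaluated.append(x)             # record which elements were converted
    return x > 1

xs = [0, 1, 2, 3, 4]

# Lazy pipeline: any stops at the first True and discards the converted
# elements, exactly the loop-with-early-exit described above.
assert any(map(pred, xs))
assert evaluated == [0, 1, 2]       # 3 and 4 were never converted

# The eager version converts every element first: same answer, more work.
evaluated.clear()
assert any([pred(x) for x in xs])
assert evaluated == xs
```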

There are discarded products (map did not complete), thus in a non-total language, some non-termination effects may be lost in the transformation. Anyway, I think all exceptions should be converted to types, thus the only non-termination effects remaining are infinite recursion.

There is the added complication that in some languages (those which enforce the separation-of-concerns, interface and implementation), the map in your example may be essentially a virtual method, i.e. selected at runtime for different functors, so the deforestation might need to be a runtime optimization.

@augustss Are you conflating 'strict' with 'not referentially transparent'? I think that is a common misperception, because there aren't many strict languages which are also RT. So the experience programmers have with strict languages is non-compositional due to the lack of RT. Afaik, the composition degrees-of-freedom are the same for both strict and lazy, given both are RT. The only trade-off is with runtime performance, and that trade-off has pluses and minuses on both sides of the dual. Please correct me if I am wrong on that.

@augustss thanks, I now understand your concern. The difference in non-termination evaluation order with runtime exceptions is irrelevant to me, because I proposed that by using types we could eliminate all exceptions (or at least eliminate catching them from the declarative RT portion of the program). I provide an example of how to eliminate divide-by-zero at copute.com (e.g. a NonZero type that can only be instantiated from a case of Signed, thus forcing the check at compile-time before calling the function that has division). A runtime exception means the program is in a random state, i.e. that the denotational semantics is (i.e. the types are) not fit to the actual semantics of the runtime. Exceptions are the antithesis of compositional, regardless of the evaluation order.
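A sketch of the NonZero idea in Python (Python can only enforce this at runtime rather than compile time, and the names are my illustrative assumptions, not the copute.com definitions): division is only reachable through a type whose construction forces the zero check up front.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class NonZero:
    value: int

    @staticmethod
    def of(n: int) -> Optional["NonZero"]:
        """The only intended way in: the caller must handle the None case."""
        return NonZero(n) if n != 0 else None

def divide(a: int, b: NonZero) -> int:
    return a // b.value             # can never raise ZeroDivisionError

d = NonZero.of(3)
assert d is not None and divide(12, d) == 4
assert NonZero.of(0) is None        # the check happens before any division
```

In a statically typed language the None branch becomes a compile-time obligation, which is the point being made above.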

In my opinion, a greater problem for extension with Haskell (and Scala, ML, etc) is afaics they allow conflation of interface and implementation. That is explained at my site. I have been rushing this (80+ hour weeks), so I may have errors.

Another problem for extension is Haskell doesn't have diamond multiple inheritance. Perhaps you've already mentioned that.

Modular Type Classes, Dreyer, Harper, Chakravarty.

@augustss agreed we must ensure termination for cases which are not bugs, but infinite recursion is a bug in strict, thus I excluded it when comparing compositionality. No need to go 100% total in strict to get the same compositionality as lazy. Perhaps Coq can ensure against the infinite recursion bug, but that is an orthogonal issue.

The diamond problem impacts compositionality, because it disables SPOT (single-point-of-truth). For example, Int can't be an instance of both Ord and OrdDivisible, if OrdDivisible inherits from Ord. Thus functions that would normally compose on Ord have to compose separately on Int and IntDivisible, and thus do not compose.

@augustss if 'map pred list' doesn't terminate, it is because list is infinite. But infinite lists break parallelism and require lazy evaluation, so I don't want them as part of a default evaluation strategy. Offering a lazy keyword (i.e. a type annotation or lazy type) can enable expression of those infinite constructions, but in my mind it should be discouraged, except where clarity of expression is a priority over parallelism. Did I still miss any predominant use case?

Thus, I concur with what Existential Type replied. For example, pattern match guards for a function are runtime function overloading (splitting the function into a function for each guard), thus the compiler shouldn't evaluate the functions (guard cases) that are not called.

It appears to me (see my idea in a prior comment) that lazy is an ad hoc, runtime, non-deterministic approximation of deforestation. We need better automatic deforestation-aware algorithms for eager compilers, so the parallelism vs. sequential execution strategy is relegated to the backend and not embedded in the language.


Last edited by Shelby on Thu Aug 25, 2011 1:49 am; edited 27 times in total


Computer Assisted Learning is capital's attempt to enslave mankind

Post  Shelby on Mon Aug 01, 2011 1:28 am

I didn't write this, but I agree with it. That is not to say that computer-assisted learning can't be useful, but rather that it must be assisted by real human teachers. This is related to the post I made some time ago, refuting the idea that computers could replace humans.

http://www.soc.napier.ac.uk/~cs66/course-notes/sml/cal.htm

CAL Rant

The user should have control at all times, you are not forced to go through the material in any particular order and you are expected to skip the dull bits and miss those exercises which are too easy for you. You decide. The author does not believe that CAL is a good way to learn. CAL is a cheap way to learn, the best way to learn is from an interactive, multi functional, intelligent, user friendly human being. The author does not understand how it is that we can no longer afford such luxuries as human teachers in a world that is teeming with under-employed talent. His main objection to CAL is that it brings us closer to "production line" learning. The production line is an invented concept, it was invented by capital in order to better exploit labour. The production line attempts to reduce each task in the manufacturing process to something so easy and mindless that anybody can do it, preferably anything. That way the value of the labour is reduced, the worker need not be trained and the capitalist can treat the worker as a replaceable component in a larger machine. It also ensures that the workers job is dull and joyless, the worker cannot be good at his or her job because the job has been designed to be so boring that it is not possible to do it badly or well, it can merely be done quickly or slowly. Production line thinking has given us much, but nothing worth the cost. We have cheap washing machines which are programmed to self destruct after five years; cars, clothes, shoes - all of our mass produced items have built in limited life spans - this is not an incidental property of the production line, it is an inevitable consequence.
The introduction of CAL is the attempt by capital to control the educators. By allowing robots to teach we devalue the teacher and make him or her into a replaceable component of the education machine. I do not see how such a dehumanizing experience can be regarded as "efficient", the real lesson learned by students is that students are not worth speaking to, that it is a waste of resources to have a person with them. The student learns that the way to succeed is to sit quietly in front of a VDU and get on with it. The interaction is a complete sham - you may go down different paths, but only those paths that I have already thought of, you can only ask those questions which I have decided to answer. You may not challenge me while "interacting". I want students to contradict, to question, to object, to challenge, to revolt, to tear down the old and replace with the new.

Do not sit quietly and work through this material like a battery student. Work with other people, talk to them, help each other out.


OOP in Standard ML (SML)

Post  Shelby on Fri Aug 05, 2011 4:53 pm

My prior email needs followup clarification.

From section 2.1 of your paper[1], I assume a functor's 'sig' or 'struct' can recurse, e.g.

Code:
functor f( a : A ) sig
  type b
  val map : (f(a) -> b) -> f(a) -> f(b)
end

I have not yet learned how 'map' can be made polymorphic on b, independently of the creation of a concrete instance of f.

With limited study, I infer that 'structure' is a concrete data type where all abstract 'type' and methods must be defined and implemented. That 'signature' is an abstract type to the extent that it has inner abstract 'type' and unimplemented method signature(s). And that 'functor' is a higher kind constructor (for abstract 'sig' or concrete 'struct'), i.e. type parametrization.

Thus roughly the following informal correspondence to Scala:

signature = trait
functor F(...) sig = trait F[...]
structure = mixin
functor F(...) struct = mixin F[...]
using ... in F = class F extends ...

So it appears Scala is more generalized than Haskell, and perhaps equivalently so to SML (this would need more study). Scala also has type bounds in the contravariant direction and variance annotations (i.e. these would be the functor parameters in SML), as well as a bottom type Nothing.

Fyi, Iry's popular chart needs correction. Haskell and ML are higher-kinded.

http://james-iry.blogspot.com/2010/05/types-la-chart.html

> ml has higher kinds only through the module system. the class you mention
> is the type of a functor in ml. so, yes, we have this capability, but in
> a different form. the rule of thumb is anything you can do with type
> classes in haskell is but an awkward special case of what you can do with
> modules in ml. my paper "modular type classes" makes this claim
> absolutely precise.

It appears my suggested separation of abstract interface and concrete implementation would require that a sig must not reference any structure nor functor struct, and functions must only reference a signature or functor sig.

For Copute, one of the planned modifications of Scala is that traits and functions can only reference traits.

Of course it is given that everything can reference the function constructor (->).

One example of the benefit of such abstraction, is for example if we reference the red, green, blue values of a color type, we are calling interface methods, not matching constructor values. Thus subtypes of color may implement red,green,blue orthogonal to their constructor values.
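A sketch of that benefit in Python (the Color hierarchy is my illustrative assumption): callers read red/green/blue through the interface, so a subtype constructed from a single value still composes, which matching on constructor values would not allow.

```python
from abc import ABC, abstractmethod

class Color(ABC):
    @abstractmethod
    def red(self) -> int: ...
    @abstractmethod
    def green(self) -> int: ...
    @abstractmethod
    def blue(self) -> int: ...

class RGB(Color):
    def __init__(self, r, g, b):
        self._r, self._g, self._b = r, g, b
    def red(self):   return self._r
    def green(self): return self._g
    def blue(self):  return self._b

class Gray(Color):
    """Constructed from one value, yet satisfies the same interface."""
    def __init__(self, level):
        self._level = level
    def red(self):   return self._level
    def green(self): return self._level
    def blue(self):  return self._level

def luminance(c: Color) -> int:     # composes on the interface only
    return (c.red() + c.green() + c.blue()) // 3
```

luminance works unchanged for both subtypes, whereas pattern matching on an RGB constructor would have excluded Gray.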

I am fairly sleepy, I hope this was coherent.

[1] Modular Type Classes, Dreyer, Harper, Chakravarty, section 2.1 Classes are signatures, instances are modules


Copute for dummies

Post  Shelby on Sun Aug 07, 2011 11:46 am

My secret weapon against the fascism that is descending on our globe, humorously explained.

It flies exponentially faster than a speeding bullet; they can't hear it, they can't see it, they can't even understand it.

If you had read any of this copute.com site before, you can read it again, because I edited and improved nearly every section, even as recently as today.

I also suggest you watch this talk by the creator of Scala on the big change to parallelism that is occurring because computer clock speed can't increase any more, only the # of cores.

Folks, here is the result of several weeks of research and writing: I now have a layman's introduction to Copute and the explanation of why and how it could change the world radically:

http://copute.com/

You may read the following sections, which are non-technical. Don't read the sub-sections unless they are explicitly listed below (because you won't understand them).

Copute (opening section)
| Higher-Level Paradigms Chart
Achieving Reed's Law
What Would I Gain?
| Improved Open Source Economic Model.
| | Open Source Lowers Cost, Increases Value
| | Project-level of Code Reuse Limits Sharing, Cost Savings, and Value
| | Module-level of Reuse Will Transition the Reputation to an Exchange Economy
| | Copyright Codifies the Economic Model
Skeptical?
| Purity
| | Benefits
| | Real World Programs
| Higher-Level
| | Low-Level Automation
| | Degrees-of-Freedom
| | | Physics of Work
| | | | Fitness
| | | | Efficiency of Work
| | | | Knowledge
| State-of-the-Art Languages
| | Haskell 98
| | Scala
| | ML Languages

If you want some independent affirmation of my ideas, see the comments near the end of this expert programmer's blog page:

http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#4642367335333855323

Feedback is welcome.

P.S. If you want to see what I mean about eliminating exceptions in Copute, load http://www.simplyscala.com/, then enter the following line of code, click the "Evaluate" button, and watch the red errors fly all over (then pat yourself on the back, you wrote your first program):

List().tail

=======================
=========ADD===========
=======================


http://Copute.com

If you are using FF or Chrome, make sure you use the horizontal scroll bar to view the content to the right.

I would like to have the columns scroll vertically, with page breaks for the height of the screen, but CSS3 multicol does not provide that option. I may experiment later with trying to force paged media on a screen browser, in order to get vertical instead of horizontal scrolling.

The document is about how to help people cooperate faster on the internet, using a new computer language, that will lead to much more freedom.

TIP: Do you see that western governments are trying to overthrow the governments of the countries that have oil in the Middle East and northern Africa, and they are giving money to the radical Muslim Brotherhood, because they want to cause fighting and disorganization so the oil will be shut off for some years. The reason they want to do this, is they want to make the prices of everything go very high to bankrupt all the people in the world, so that they can make us their slaves. Of course, they say this is for democracy and the freedom of the people in those countries. How stupid people are to believe them.

I suggest you read the section "Physics of Work" on my site. Then you will understand why a centralized government is always evil and going towards failure. It is due to the laws of physics. Maybe you can understand if you read that section very carefully.

http://Copute.com

Skeptical?
| Higher-Level
| | Degrees-of-Freedom
| | | Physics of Work
| | | | Fitness
| | | | Efficiency of Work
| | | | Knowledge


Social cloud is how we will filter information more efficiently

Post  Shelby on Sun Aug 21, 2011 7:06 pm

You all know about filtering of information, because you come to this forum to read a lot of the information from the web condensed and filtered for you by people you think have a common interest and outlook (or at least related outlook worth reading).

Read this article for a layman's explanation:

http://esr.ibiblio.org/?p=3614&cpage=1#comment-318814

Shelby wrote:
You've identified my model for the simultaneous optimization of CTR (ratio of clicks to views of ads) and CR (conversion ratio of visitors to sales).

This should, once sufficiently ubiquitous, make it impossible for any entity (e.g. a blog owner) to censor or centralize the control over comments, because the comments (CR refinement) will continue into the social cloud.

Does this present a conflict of interest for Google, because if the social cloud is truly open, then how does it not eventually reduce the leverage to charge rents on advertising matching? Or does it increase the volume of ad spending (higher ROI for advertisers) and reward the business model with economy-of-scale for the cloud servers? Why wouldn't something like Amazon's commodity server model win over Google's smart servers model?

As information becomes more tailored to smaller groups, then P2P storage becomes less viable (unless we want to pay a huge bandwidth premium to store data on peers that don't use it), because there isn't enough redundancy of storage in your peer group, so it appears server farms are not going away. The bifurcating tree network topology is thermodynamically more efficient than the fully connected mesh (e.g. the brain is not a fully connected mesh, each residential water pipe doesn't connect individually to the main pumping station, etc).

P.S. I have become less concerned about the vulnerability (especially to fascist govt) of centralized storage, because as the refinement of data becomes more free market (individualized), it becomes more complex when viewed as a whole, thus the attackers will find it more challenging to impact parts without impacting themselves. Thus it appears to me the commodity server model wins, i.e. less intelligence in the server farm, and more in the virtual cloud.

Here were my prior posts on this matter:

Tension between CR and CTR for advertising!
Simultaneous maximization of both CR and CTR is Google's Achilles heel

=============================

http://esr.ibiblio.org/?p=3614&cpage=2#comment-319354

Isn't it very simple? Most of the comments have agreed, and I concur. Nature made whole foods slower to digest (e.g. fiber, not finely ground to infinite surface area, etc) so we don't spike our insulin, plaque our liver, etc.. Whole foods satisfy appetite because they contain the complex micro-nutrients our body craves (e.g. amino acids, etc), which processed foods do not. Complex carbs, sugars, fats should not be limited, because our body needs and will regulate these (even the ratios between food groups) signaled by our cravings. To change body form, increase exercise, don't limit diet. Processed carbs, sugars, and fats should be entirely avoided, as these screw up the feedback mechanism and probably cause the confusion referred to. The appetite feedback loop is out-of-whack when not consuming whole foods, and probably also when not exercising sufficiently. Except for outlier adverse genetics, no one eating only whole foods and exercising, needs to count calories and grams.

@Tom if the government wasn't taxing us for the healthcare of those who smoke, and those who breathe their smoke in public venues, then we would be impacted less by their smoking. Probably there would be less smokers if government wasn't subsidizing their health care and lower job performance, so then the nuisance for us non-smokers would also be mitigated.

@gottlob Isn't social media a revolution in information targeting, i.e. fitness, which is orthogonal to the "quality of demand" to which you refer?

Tangentially, I also think that knowledge will soon become fungible money (and I don't mean anything like BitCoin, of which I am highly critical), in the form of compositional programming modules, which will change the open source model from esr's gift to an exchange economy. Remember from esr's recent blog, my comment was that software engineering is unique in that it is used by all the others, and it is never static, and thus is a reasonable proxy for (fundamental of) broad based knowledge. I suggest a broader theory, that the industrial age is dying, which is why we see the potential for billions unemployed. But the software age is coming up fast to the rescue. Open source is a key step, but I don't think the gift economy contains enough relative market value information to scale it to billions of jobs.

@Ken doesn't genetics matter only at the extremes of desired outcome or adverse genetics? Whole foods and a reasonable level of physical activity are sufficient for most, e.g. some fat is normally desirable and necessary.

===================================

http://esr.ibiblio.org/?p=3634&cpage=1#comment-319373

Perhaps "exeunt" because, by implication, it is the company that exited the stage.

I am saddened to read that Jobs was back in hospital on June 29, and that he won't be around to contribute and observe the ultimate outcome of the software age and open source. Contrast that with my wanting to eliminate his captive market for collecting 30% rents on software development and dictating the morality of apps. The competitive OPEN (not captive) balance is for investment capital to extract about 10% of the increase in capital.

Of course Apple will lose market share in the not too distant future, as will every capital order that exists by capturing more than 10% of the increase resulting from its investment. Btw, this is why a gold standard can never work: society can't progress when new innovation and labor (i.e. capital) is enslaved to pre-existing capital (prior innovation and labor).

In my mind, the more interesting question is what happens to Google's ad renting captive market (I suspect taking way more than 10% of the increase in many cases), when the revolution of information targeting fitness of OPEN (i.e. not captive) social media has the same explosion into entropy that Android did to Apple. The waterfall exponential shift won't take more than 2 years once it begins. I suppose Google will be forced to lower their take (as Android will force Apple), thus exponentially increasing the size of the ad market is critical. So the motivation for Android is clear, but ironically it may accelerate Google's transformation from an ad company to a software company. But as a software company, I expect Google will be much more valuable as a collection of individuals or smaller groups, thus there will be an exodus of talent. I don't yet know how many years forward I am looking, probably not even a decade.


Last edited by Shelby on Wed Aug 24, 2011 11:00 pm; edited 2 times in total

Shelby
Admin

Posts: 3107
Join date: 2008-10-21

View user profile http://GoldWeTrust.com

Back to top Go down

How to make knowledge fungible, i.e. a currency

Post  Shelby on Tue Aug 23, 2011 5:41 am

UPDATE: a new post on Entropic Efficiency summarizes this idea, perhaps better than this post does.

Due to technical advances in code reuse, Copute seems to have the potential to become the next mainstream programming language, regardless of whether I can successfully design a new economic model for open source, i.e. exchange (capitalism) instead of gift (socialism).

Yet I am very curious about how knowledge could become a fungible currency.

One of the key insights was that software development is the only engineering discipline that is used by all the others. Thus software development is a fundamental expression of knowledge. So if there was some way to make software development fungible, then by proxy, knowledge would be also.

The idea had been that if there was some way people could create reusable software modules, then these could be exchanged as building blocks of larger software modules and applications. In theory, Copute is designed to make the modules reusable, so the remaining two challenges for a capitalistic exchange were:

1. how to define a fungible unit of the knowledge currency
2. how to convert this unit to general economy

The unit of a knowledge currency would be, due to the proposed proxy for knowledge, a unit of programming code. But the challenge was how to define such a unit that reflects a free market value and yet remains fungible. First of all, there is no known standard measure of code value, e.g. lines of code (LOC) appears not to be well correlated with code complexity nor market value. Note that every supply/demand curve has price on one of its axes, and quantity on the other. Thus if there were competing programming modules and their price was equivalent, then the relative quantity of demand would determine which had more market value. Price is useful when it contains information about relative market value, but if the software development process needs to incorporate many modules, and modules which use other modules, price contains no relative information of market value, because the developer can't possibly factor all of these economic relationships, and modules wouldn't be able to reliably reuse other modules if the reused module prices could change. So the first key conclusion is that the unit of the programming code price must be standardized upon some metric which is reasonably correlated to effort, and then relative market value can be measured by relative quantity of demand for competing modules.

When designing the Copute grammar, I realized that everything is a function with one input parameter. Thus a non-ambiguous measure of complexity is the number of such single-parameter functions in a module. The single-parameter function is therefore the basic unit of programming code, and by proxy the unit of knowledge. The relative complexity (and thus relative price) of code can then be automatically determined at compile-time.
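As a sketch of that metric (illustrative only: it parses Python source rather than Copute's actual grammar, and treats an n-parameter definition as n curried single-parameter functions):

```python
import ast

def complexity(source: str) -> int:
    """Count curried single-parameter functions in the source.

    Sketch of the proposed unit: an n-parameter function desugars by
    currying into n nested single-parameter functions, contributing n;
    a zero-parameter function still contributes 1.
    """
    total = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.Lambda)):
            total += max(1, len(node.args.args))
    return total

# add(x, y) counts as two curried units, the lambda as one: 3 in total.
print(complexity("def add(x, y): return x + y\ninc = lambda n: n + 1"))  # 3
```

Since the count is computed syntactically, it is available at compile time, as required; whether such a count correlates well with effort is of course the open empirical question.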

But how to convert this unit to a price in the general economy, such that an incorrect automated relative price of modules would not skew the relative demand, e.g. code that could fetch a much higher price while still gaining the same quantity of relative demand? The solution is of course competition. If a module owner can get a better (quantity × price) in another marketplace, they will. Additionally, the proposed marketplace will be non-exclusive, because the conversion of this knowledge unit to a price in the general economy will not be based on the relative choice of modules. In other words, the consumer of these units will not pay per unit, but per a metric of the total value added.

Notice that most programming languages and libraries are given away for free. This is the gift economy, i.e. I scratch your back if you scratch mine, and let's not keep very specific accounting of it. Thus the improvement should retain the same level of efficiency, while capturing more market value information. The efficiency is basically that it makes no sense to pay out a large percentage of a software development's cost (or potential profit) to reuse existing work, because otherwise new innovation and labor becomes a slave to capital (old innovation and labor). Thus the marketplace of modules offered for reuse should only extract, for example, the greater of 1% of the gross sales or 10% of the development cost, for the software project reusing any quantity of the modules offered. This 10% comes from the Bible's wisdom to take only 10% of the increase. The 1% of gross assumes that a company should pay at least 10% of gross sales in (programming, i.e. knowledge) development costs, and our model asks for 10% of that 10%. There should be some threshold, so that the vast majority of individual developers never pay.

So the offer to the software developer is, you can reuse as many modules as you want from our marketplace, and you pay us the greater of 1% of your gross sales, or 10% of your development cost, of the software product developed. This income is then distributed to the module owners, using the relative market value of (single-parameter function units x quantity of module reuse).
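A minimal sketch of the arithmetic behind this offer. The module names and figures are hypothetical, purely to illustrate the proposed rules:

```python
def project_royalty(gross_sales, development_cost):
    """Royalty owed by a project reusing any quantity of marketplace
    modules: the greater of 1% of gross sales or 10% of dev cost."""
    return max(0.01 * gross_sales, 0.10 * development_cost)

def distribute(royalty, modules):
    """Split the royalty among module owners in proportion to
    (single-parameter-function units x quantity of module reuse)."""
    weights = {name: units * reuse for name, (units, reuse) in modules.items()}
    total = sum(weights.values())
    return {name: royalty * w / total for name, w in weights.items()}

# Hypothetical project: $1,000,000 gross sales, $200,000 dev cost.
# Royalty is the greater of 1% of gross (10,000) or 10% of cost (20,000).
royalty = project_royalty(1_000_000, 200_000)
# "parser": 300 function units reused by 2 projects; "ui": 100 units, 4 reuses.
shares = distribute(royalty, {"parser": (300, 2), "ui": (100, 4)})
```

The shares always sum to the royalty, so no nominal module price ever needs to be set; only the relative weights matter.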

There is one remaining problem-- bug fixes and module improvements by 3rd parties. How do we motivate 3rd party bug fixes and module improvements, without penalizing the module owner? The module owner wants bug fixes and improvements, but doesn't want to pay more for them than they are worth. It seems the best is to let the module owner set a bounty on specific items in the bug and feature request database, and that bounty can be a percentage of the module's income.

This seems very workable and sound. I am hopeful that this makes knowledge into a fungible currency, with radical implications for the world.

=================
Additional Explanation
=================

Read the entire thread that is and precedes the following linked post:

http://goldwetrust.up-with.com/t44p75-what-is-money#4535

==============================
http://esr.ibiblio.org/?p=3634&cpage=4#comment-320342

@Jeff Read
That research correlated SLOC with fault-proneness complexity. It is possible (I expect likely) that given the same complexity of application, bloated code may have more faults than dense code.

A metric that correlates with application complexity (as a proxy for relative price, in price × market quantity = value) is a requirement in the exchange model for open source that I am proposing. I will probably investigate a metric of lambda term count and denotational semantics (i.e. userland type system) complexity.

The point is that a metric for fault-proneness complexity may not be correlated with application complexity, i.e. effort and difficulty of the problem solved.

================
UPDATE: It is very important that the minimum level of royalty-free reuse of code modules have at least the following 3 qualities:


  1. Legal stipulation that the royalty-free reuse of code modules (from copute.com's repository only) must apply at least minimally to the ability of an individual to support himself with a small company. So the limits should be minimally roughly the development cost where 1 full-time programmer is employed (for the royalty on the development cost option), or the size of the market necessary to support the livelihood of an individual and his family. And the limit should be the greater of the two.
  2. Legal stipulation in the software license for all the current AND FUTURE modules, that the limits in #1 above, may never be decreased (although they could be increased).
  3. Legal stipulation that, in the event that during any duration a court in any jurisdiction removes the force of these terms, the license of the software module grants the people (but not corporations) in that jurisdiction a royalty-free license to all the modules in copute.com's repository, without any limitation. Then the courts will have every software module owner up in arms against them.


The reason is that if, for example, copute.com gained a natural monopoly due to being the first mover and gaining inertia from the greatest economy-of-scale of modules, then it would be possible for passive capital to gain control over copute.com and change the terms of the company to extract unreasonable (parasitic) rents from the entire community; thus we would be back to a fiat-like system, where the capitalists control the knowledge workers' capital. Since those paying market-priced royalties in Copute will be the larger corporations, if the govt decides to heavily tax it, they will tax their own passive capitalists. The design of Copute is to tax the frictional transaction costs in the Theory of the Firm that give rise to the corporation. Thus the design of Copute's economic model is for its value to decrease in legal tender terms as time goes on, while its value in knowledge terms increases. This is a very unique economic design, which I don't think has existed in any business model I am aware of.

Having these assurances will encourage more contribution to copute.com's repository. I would encourage competing repositories, and for them to adopt a similar license strategy.


Last edited by Shelby on Sun Sep 11, 2011 5:38 am; edited 7 times in total


Creator of the browser, says software will now take over the world

Post  Shelby on Tue Aug 23, 2011 9:00 am

Read my prior post about software being in everything and thus is a proxy for knowledge, then read this:

http://online.wsj.com/article/SB10001424053111903480904576512250915629460.html (make sure you watch the video!)

Here is an excerpt:

My own theory is that we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy.

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Why is this happening now?

Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.

Over two billion people now use the broadband Internet, up from perhaps 50 million a decade ago, when I was at Netscape, the company I co-founded. In the next 10 years, I expect at least five billion people worldwide to own smartphones, giving every individual with such a phone instant access to the full power of the Internet, every moment of every day.

On the back end, software programming tools and Internet-based services make it easy to launch new global software-powered start-ups in many industries—without the need to invest in new infrastructure and train new employees. In 2000, when my partner Ben Horowitz was CEO of the first cloud computing company, Loudcloud, the cost of a customer running a basic Internet application was approximately $150,000 a month. Running that same application today in Amazon's cloud costs about $1,500 a month.

With lower start-up costs and a vastly expanded market for online services, the result is a global economy that for the first time will be fully digitally wired—the dream of every cyber-visionary of the early 1990s, finally delivered, a full generation later.

Perhaps the single most dramatic example of this phenomenon of software eating a traditional business is the suicide of Borders and corresponding rise of Amazon. In 2001, Borders agreed to hand over its online business to Amazon under the theory that online book sales were non-strategic and unimportant.

Oops.

Today, the world's largest bookseller, Amazon, is a software company—its core capability is its amazing software engine for selling virtually everything online, no retail stores necessary. On top of that, while Borders was thrashing in the throes of impending bankruptcy, Amazon rearranged its web site to promote its Kindle digital books over physical books for the first time. Now even the books themselves are software.

Today's largest video service by number of subscribers is a software company: Netflix. How Netflix eviscerated Blockbuster is an old story, but now other traditional entertainment providers are facing the same threat. Comcast, Time Warner and others are responding by transforming themselves into software companies with efforts such as TV Everywhere, which liberates content from the physical cable and connects it to smartphones and tablets.

Today's dominant music companies are software companies, too: Apple's iTunes, Spotify and Pandora. Traditional record labels increasingly exist only to provide those software companies with content. Industry revenue from digital channels totaled $4.6 billion in 2010, growing to 29% of total revenue from 2% in 2004.

Today's fastest growing entertainment companies are videogame makers—again, software...


I'm becoming skeptical about the claim that pure functional is generally log n slower

Post  Shelby on Thu Aug 25, 2011 6:54 am

http://stackoverflow.com/questions/1255018/n-queens-in-haskell-without-list-traversal/7194832#7194832

Shelby Moore III wrote:
I am becoming skeptical about [the claim][1] that pure functional is generally O(log n) slower. See also Edward Kmett's answer, which makes that claim. That may apply to random mutable array access in the theoretical sense, but random mutable array access is probably not what most algorithms require, when properly studied for repeatable structure, i.e. not random. I think Edward Kmett refers to this when he writes, "exploit locality of updates".

I am thinking O(1) is theoretically possible in a pure functional version of the n-queens algorithm, by adding an undo method for the DiffArray, which requests a look back in differences to remove duplicates and avoid replaying them.

If I am correct in my understanding of the way the backtracking n-queens algorithm operates, then the slowdown caused by the DiffArray is because the unnecessary differences are being retained.

In the abstract, a "DiffArray" (not necessarily Haskell's) has (or could have) a set element method which returns a new copy of the array and stores a difference record with the original copy, including a pointer to the new changed copy. When the original copy needs to access an element, then this list of differences has to be replayed in reverse to undo the changes on a copy of the current copy. Note there is even the overhead that this single-linked list has to be walked to the end, before it can be replayed.

Imagine instead these were stored as a double-linked list, and there was an undo operation as follows.

From an abstract conceptual level, what the backtracking n-queens algorithm does is recursively operate on some arrays of booleans, moving the queen's position incrementally forward in those arrays on each recursive level. See [this animation][2].

Working this out in my head only, I visualize that the reason DiffArray is so slow is that when the queen is moved from one position to another, the boolean flag for the original position is set back to false and the new position is set to true, and these differences are recorded, yet they are unnecessary, because when replayed in reverse, the array ends up with the same values it had before the replay began. Thus instead of using a set operation to set back to false, what is needed is an undo method call, optionally with an input parameter telling DiffArray what "undo to" value to search for in the aforementioned double-linked list of differences. If that "undo to" value is found in a difference record in the double-linked list, there are no conflicting intermediate changes on that same array element found when walking back in the list search, and the current value equals the "undo from" value in that difference record, then the record can be removed and that old copy can be re-pointed to the next record in the double-linked list.

What this accomplishes is to remove the unnecessary copying of the entire array on backtracking. There is still some extra overhead as compared to the imperative version of the algorithm, for adding and undoing the add of difference records, but this can be nearer to constant time, i.e. O(1).

If I correctly understand the n-queen algorithm, the lookback for the undo operation is only one, so there is no walk. Thus it isn't even necessary to store the difference of the set element when moving the queen position, since it will be undone before the old copy will be accessed. We just need a way to express this type safely, which is easy enough to do, but I will leave it as an exercise for the reader, as this post is too long already.


[1]: http://goldwetrust.up-with.com/t112p180-computers#4437
[2]: http://en.wikipedia.org/w/index.php?title=Eight_queens_puzzle&oldid=444294337#An_animated_version_of_the_recursive_solution

The backtracking n-queens algorithm is a recursive function that takes 3 parameters: an array for each diagonal direction (\ and /), and a row count. It iterates over the columns on that row, moving the queen position on that row in the arrays, and recursing on each column position with cur_row + 1. So it seems to me the movement of the queen position in the arrays is undoable as I described in my answer. It does seem too easy, doesn't it? So someone please tell me why not, or I will find out when I write out an implementation in code.
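For reference, here is that backtracking scheme in imperative Python (rather than Haskell). The point to notice is that every flag set on the way down is unset on the way back up, so a persistent (DiffArray-style) version could in principle drop those difference records entirely:

```python
def n_queens(n):
    """Count n-queens solutions with the backtracking scheme described
    above: boolean occupancy arrays for the columns and both diagonal
    directions (\\ and /), indexed by col, row-col, and row+col.
    Each flag set on entering a row is unset on backtracking, so every
    mutation is undone before any earlier state is revisited -- the
    property that makes the undo O(1)."""
    cols = [False] * n
    diag1 = [False] * (2 * n - 1)   # \ diagonals: index row - col + n - 1
    diag2 = [False] * (2 * n - 1)   # / diagonals: index row + col

    def place(row):
        if row == n:
            return 1
        count = 0
        for col in range(n):
            d1, d2 = row - col + n - 1, row + col
            if not (cols[col] or diag1[d1] or diag2[d2]):
                cols[col] = diag1[d1] = diag2[d2] = True   # move queen in
                count += place(row + 1)
                cols[col] = diag1[d1] = diag2[d2] = False  # O(1) undo
        return count

    return place(0)

print(n_queens(8))  # the classic 8x8 board has 92 solutions
```

Each set/unset pair cancels exactly, which is the "unnecessary differences" observation above: a DiffArray replaying these records in reverse would restore values that were never observed in between.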


Industrial Age is being replaced by the Software (Knowledge) Age

Post  Shelby on Thu Aug 25, 2011 7:16 pm

http://esr.ibiblio.org/?p=3634&cpage=2#comment-319568

I think perhaps my prior comment was not coherent enough.

With smartphone hardware costs trending asymptotically towards the cost of 100 grams of sand, plastic, and mostly non-precious metals, the future profit margins are in software. I previously wrote that the industrial age is dying, to be displaced by the software (knowledge) age. Automation is increasing, and costs are declining towards material inputs, thus aggregate profits (and percentage share of the economy) are declining for manufacturing, even if profit margins were maintained (which they are not, because 6 billion agrarians are suddenly competing to join the industrial age in the developing world). There is now even a $1200 3D printer. Wow.

Assuming the smartphone is becoming a general purpose personal computer, the software paradigm that provides unbounded degrees-of-freedom, can in theory gain at an exponential rate more use cases over a bounded platform.

Even if Apple competes on the low-price end, I predict their waterfall implosion will be driven by some aspect of "web3.0" that diminishes their captive high rents on content and cloud services, because this will cut off their ability to subsidize software control (and the low-end hardware), i.e. the subsidy for not leveraging the community's capital via unbounded open source. Such a paradigm shift may also threaten Google's captive high rents on ad services, but Google leverages open source to minimize the subsidy. I envision that Google will lose control over the platform once the high rate of market growth slows and vendors compete for a static market-size pie. That will be a desirable outcome at that stage of maturity.

The high-level conceptual battle right now is not between hardware nor software platforms, features, etc.. It is a battle between unbounded and bounded degrees-of-freedom. The future belongs to freedom and inability to collect high rents by capturing markets in lower degrees-of-freedom. So I would bet against all large corporations (eventually), and bet heavily on small, nimble software companies.

@Winter
I agree that the future profit margins belong to the owners of human knowledge (as distinguished from mindless repetitive labor that adds no knowledge), i.e. the individual programmers. Services are trending asymptotically toward (but never reaching) full automation, meaning that programming will move continually more high-level forever. Software is never static.

Thus, services is software. Knowledge is software.

I have written down (see the What Would I Gain? -> Improved Open Source Economic Model, and scroll horizontally) what I think are the key theoretical concepts required to break down the compositional barriers (lack of degrees-of-freedom) so the individual can take ownership of his capital. I have emailed with Robert Harper about this. Afaics, once this is in place, large companies will not be able to take large rent shares. We are on the cusp of the end of the large corporation and the rise of the individual programmer, hopefully millions or billions of them. Else massive unemployment.

@Winter
Agreed. The bits are never static. They continually require new knowledge to further refine and adapt them. It is not the bits that are valuable, but the knowledge of the bits, and how to fix bugs, improve the bits, interoperate with new bits, and compose bits. And this process never stops, because the trend to maximum entropy (possibilities) never ceases (2nd Law of Thermo). What makes software unique among (fundamental of) all other engineering disciplines is that software is the encoding in bits of the knowledge of the other disciplines – a continual process.

But actually it is not an encoding in bits. It is an encoding in continually higher-level denotational semantics. The key epiphany is how we improve that process, and the tipping point where it impacts the aggregation granularity of capital formation in the economic model of that process. If you understand language design, the links I provided (and the references I cited in them) might be interesting (or the start of a debate).

http://esr.ibiblio.org/?p=3634&cpage=2#comment-319625

@nigel Larger software teams accomplish less due to the Mythical Man-Month. My conjecture is that individual developers will become the large team, sans the compositional gridlock of the MMM, with the individual contributions composing organically in a free market. I realize it has been only a dream for a long time. On the technical side, there is at least one critical compositional error afaik all these languages have made, including the new ones you mentioned: they conflated compile-time interface and implementation. The unityped (dynamic) languages, e.g. Python, Ruby, have no compile-time type information.

If we define complexity to be the loss of degrees-of-freedom, I disagree that the complexity rises. Each higher-level denotational semantics unifies complexity and increases the degrees-of-freedom. For example, category theory has enabled us to lift all morphism functions, i.e. Type1 -> Type2, to functions on all functors, i.e. Functor[Type1] -> Functor[Type2]. So we don't have to write a separate function for each functor, e.g. List[Int], List[Cat], HashTable[String]. Perhaps complexity has been rising because of the languages we use. We haven't had a new mainstream typed language since Java, which is arguably C++ with garbage collection. For another example, before assembly language compilers, we had the complexity of manually adjusting the cascade of memory offsets in the entire program every time we changed a line of code.
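The lifting point can be illustrated with a dynamically typed sketch (Python stands in for a language with a real Functor abstraction; a typed language would dispatch on the Functor instance rather than on isinstance checks):

```python
def fmap(f, functor):
    """Lift an ordinary function f: A -> B to operate over several
    container shapes, instead of writing one mapping function per
    container type."""
    if isinstance(functor, list):
        return [f(x) for x in functor]
    if isinstance(functor, tuple):
        return tuple(f(x) for x in functor)
    if isinstance(functor, dict):   # map over the values
        return {k: f(v) for k, v in functor.items()}
    raise TypeError(f"no Functor instance for {type(functor).__name__}")

# One lifted function serves List[Int], HashTable[String], etc.
double = lambda x: x * 2
print(fmap(double, [1, 2, 3]))         # [2, 4, 6]
print(fmap(double, {"a": 1, "b": 2}))  # {'a': 2, 'b': 4}
```

One definition of the morphism, reused across every container shape, is exactly the unification of complexity claimed above.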

Indeed it can consume a decade or more for a language to gain widespread adoption, but that isn't always the case, e.g. PHP3 launched in 1997 and was widespread by 1999. Afaik, a JVM language such as Scala can compile programs for Android.

@Winter, agreed. It is my hope that someday we won't need to pay for all the complexity bloat and MMM losses. We will get closer to paying the marginal cost of creating knowledge, instead of the marginal costs of captive market strategies, code refactoring, "repeating myself", etc. I bet a lot boils down to the fundamentals of the language we use to express our knowledge. Should that be surprising?

@phil captive markets grow as far as their subsidy can sustain them, then they implode, because of the exponential quality of entropy and the Second Law of Thermodynamics (or as otherwise stated in Coase's Theorem). Apple's subsidy might be stable for as long as their content and cloud revenues are not threatened; perhaps they can even give the phones away for free or at negative cost. That is why I think the big threat to Apple will come from open web apps, not from the Android hardware directly. The Android hardware is creating the huge market for open apps. I guess many are counting on HTML5, but the problem is that its design is static and by committee, thus not benefiting from network effects and real-time free market adaptation. I would like something faster moving for "Web3.0" to attack Apple's captive market.


More on Copute's exchange model for open source; and Industrial Age decline

Post  Shelby on Sat Aug 27, 2011 8:01 pm

http://esr.ibiblio.org/?p=3634&cpage=3#comment-319827

@nigel
I agree with esr's Inverse Commons thesis. Apparently there is amble evidence of the success of open source. The involvement of corporations is anticipated by that model, thus if anything provides more evidence of the model's success. The "gift or reputation" component of the model is in harmony with the strategic benefit to corporations. I also concur with esr's stated reasons for doubting how an exchange economy could work for open source.

However as a refinement, I also think the lack of an exchange economy in that model means that mostly only entities who need a commons software improvement are motivated to participate. I know this is true for myself. To broaden the impact of open source, and motivate people to contribute for the income directly correlated to the market value of their contribution, I have in theory devised a way to enable the open source exchange economy. Notably it doesn't require copyright policing, nor anyone to assign a monetary value to a module, nor micro-payments, nor must it be one centralized marketplace. It is all annealed by the market. Relative value is calculated by relative module use, and relative price in an indirect way. Nominal price is never calculated. And for the vast majority of users and certainly all non-profit ones, it remains a "gift or reputation" economy.

It is my hope that this can drive millions or billions of people to become programmers. I might be wrong about this though, and I remain committed to the Inverse Commons as well. Please note that my theory is that adding more fine-grained relative value information to a market can make it more efficient (assuming there are no bottlenecks), because there would be more stored information and degrees-of-freedom annealed by the free market. Relative price is information. So my model is not so much about exchange of fiat currency, but about measuring this information. My "pie in the sky" dream is that knowledge, with software modules as the proxy, becomes fungible money and thus a currency. Note that gaming currencies became so widespread that China outlawed their convertibility to yuan.

@The Monster
The opposite actually. Aggregate debt is growing nominally by the aggregate interest rate, while it is serviced. Aggregate debt can shrink during implosive defaults, but not if the defaults are transferred to government debt, as is the case in western world today.

I understand your argument that real debt isn't growing if production increases faster than debt (a/k/a positive marginal-utility-of-debt), but you discount the damage due to supply and demand distortion. In fact, the western world is now in negative marginal-utility-of-debt, i.e. the more public debt we add, the faster real GDP shrinks.

This is explained by the disadvantage of a guaranteed rate of return compared with equity: the investor has less incentive to make sure the money is invested wisely by the borrower, i.e. passive investment. No amount of regulation can make it so. The growth of passively invested debt causes mutually escalating debt-induced supply and demand. When that implodes (due to the distortion the debt caused in the information of supply and demand), the capitalist lender demands the government enforce the socialization of his losses. Thus the fixed interest rate (usury) model is an enslavement of labor and innovation to passive stored capital, and is the cause of the boom and bust cycle. Equity is in theory a far superior model. But the problem with equity is the attempt to guarantee a rate of return via captive markets (a/k/a monopolies or oligopolies), i.e. again stored capital wants to be passive. The basic problem is stored capital, i.e. the concept that what we did in the past has a value independent of our continued involvement. I am trying to end the value of passive stored capital with my work on an exchange open source economy, and I think it is declining anyway with the decline of the industrial (capital-intensive) age.

@nigel
I think "individual devs will supplant large teams funded by large corporations", and it will be because the marginal cost of software is not zero. In The Magic Cauldron, my understanding is that esr argues that software is not free and has costs that in most cases could never be recovered by selling it. The Use-Value and Indirect Sale-Value economic models presented in The Magic Cauldron seem to acknowledge that open source will be funded by corporations. I think there can be an exchange model which can enable individual devs to function more autonomously, but if it is achieved, it will be because software is not free and has use-value cases that can be worked on independently in compositional modules.

@The Monster
Evidence says that the total (a/k/a aggregate) debt of the fiat system increases at the compounded prevalent interest rate. For my linked graphs, note that M3 is debt in a fiat system, because all fiat money is borrowed into existence in a fractional reserve financial system.

Apparently the reason for this debt spiral is that even while some pay down debt, that debt elevated demand, which escalates supply and debt, which then escalates demand, which then escalates supply and debt, recursively until the resources feeding the system become too costly, then it implodes. There are also at least two other reasons. When money is borrowed into existence against a bank's 1-10% reserve ratio and is deposited, it increases the bank's reserves, thus increasing the amount that can be loaned into existence. Perhaps more importantly, the money to pay off the debt has to be created (since the entire economy is run on debt money), and thus must grow at the compounded interest rate on the aggregate in the economy, as the evidence shows. So raising interest rates to slow down the economy actually increases the rate at which the debt grows.
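
The two mechanisms above can be sketched numerically. This is a toy illustration with hypothetical numbers; the function names and figures are mine, not drawn from any economic dataset:

```python
def compounded_debt(principal, rate, years):
    """Aggregate debt grows at the compounded prevalent interest rate."""
    return principal * (1 + rate) ** years

def money_multiplier(deposit, reserve_ratio):
    """Upper bound on money loaned into existence from one deposit under
    fractional reserve banking: each loan is re-deposited and re-lent,
    summing the geometric series deposit * (1 + (1-r) + (1-r)^2 + ...)
    = deposit / r."""
    return deposit / reserve_ratio

print(compounded_debt(100.0, 0.05, 10))   # ~162.9: debt outgrows principal
print(money_multiplier(100.0, 0.10))      # 1000.0: a 10% reserve ratio can
                                          # multiply one deposit tenfold
```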

I did not criticize storage of sufficient inventories. Physical inventories are becoming a smaller component of the global economy, and I bet at an exponential rate. I criticized passive capital, meaning where our past effort extracts captive-market rents on the future efforts of others, simply for doing nothing but carelessly (i.e. passively, with guaranteed return) loaning digits which represent that past effort (or guaranteeing ROI with monopolies, collusion with the government, etc). Contrast this against, say, offering some product and the ongoing effort to support that product, i.e. active investment in any venture where your past experience is being applied most efficiently towards active current production, which would include equity investments based on your active expert knowledge of what the market needs most. What you wrote about Ric does not disagree with my thesis. For example, as I understand esr's thesis about use versus sale value in The Magic Cauldron, it says open source program code can't be rented unless there is ongoing value added, i.e. the value is in the service, not the static bits. He mentions the bargain bin in software stores for unsupported software. Machine tools are critically important, but not the raw material inputs, and not so much the machine itself, but rather the knowledge of the design, operation, and future adaptation and improvement.

@nigel:
Btw, I worked briefly on Corel Painter in the mid-1990s, when it was Fractal Design Painter, and Steve Guttman came to us from being VP of Marketing for Adobe Photoshop (he is now a VP at Microsoft, and Mark Zimmer is now producing patents for Apple). I escaped from that mentality and software model, under which my creative abilities were highly under-utilized because we had to give way to the founder heroes (and I took advantage of it too). I appreciated the learning experience and the opportunity to work with people with 160+ IQs (Tom Hedges purportedly could memorize a whole page of a phone book by glancing at it), but I also saw the waste (captive enslavement of those who need a salary) of resources in that non-optimal allocation model. I have not worked for a company since.

With a compositional model, I assert proprietary software is doomed to the margins. Open source increases cooperation. No one can make the cost of software zero. Open source is a liberty model, not a zero-cost model. My understanding is that Richard Stallman and the FSF objected to OSI's replacement of the term "free software" with "open source". My understanding is that the FSF requires the license to disallow charging for derivative software so that the freedom-of-use is not subverted by forking, but perhaps this is in tension with the reality of the nonzero cost of software and the types of business models that derivatives might want to explore. I may not fully understand the rift, or maybe there is no significant rift, yet apparently there is some misunderstanding outside the community of what is "free" in open source.

If we have technology such that software modules are compositional without refactoring, I think this tension in derivative software will diminish, because then any derivative module (which is a composition of modules) is completely orthogonal code, and thus may carry a separate license from the modules it reuses without impacting the freedom-of-use of the reused component modules, because the reused modules will not have been forked nor modified. Thus I propose that with a compositional computer language, individual modules can be created orthogonally by individual devs and small teams, and the importance of corporations will fade.

@Jeff Read and @uma:
In my "pie in the sky" vision, corporations can still exist to slap compositional glitter onto core modules created by compositional open source. They can try to sell it to people, but since the cost of producing such compositions will decline so much (because the core modules have been created by others and their cost amortized over greater reuse), the opportunities to create captive markets will fade also. In a very abstract theoretical sense, this is about degrees-of-freedom, fitness, and resonance (in a maximally generalized, abstract definition of those terms).

The cost of creating the core modules is not zero, so I envision an exchange economy to amortize the costs of software in such a compositional open source model. But first we need the compositional technology, which is sort of a Holy Grail, so skepticism is expected. I am skeptical myself, and thus curious to build it and find out if it works. However, if someone can save me time by pointing out why it can't work, that would be much appreciated. Which is perhaps why I mentioned it to the great thinkers here. Also, I hope to learn how to become a positive member of this community.

@The Monster:
I don't see how the conclusion would be different whether the growth of total debt causes the interest rate to be correlated, or vice versa; transposing cause and effect makes no difference. Even if neither the total debt nor the interest rate is causing the other, the conclusion remains that total debt grows at the prevalent interest rate compounded. And I don't think you are refuting that an increase in debt increases demand and supply (of some items) in the economy. Recently it was a housing bubble. Loans pull demand forward, and starve the future of demand.

@The Thinking Man:
The problem with + for string concatenation arises only when there is automatic (implicit) conversion between strings and other types that use the same operator at the same precedence level, e.g. integers. This creates an ambiguity in the grammar. Eliminate the implicit conversion, and + is fine.
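
For illustration (in Python, rather than any particular language under discussion): Python overloads + for both concatenation and addition, but because there is no implicit conversion between strings and integers, mixed operands are rejected rather than silently coerced, so no ambiguity arises:

```python
# + is overloaded yet unambiguous when implicit conversion is absent.
assert "1" + "2" == "12"   # string concatenation
assert 1 + 2 == 3          # integer addition

try:
    "1" + 2                # mixed operands: a type error, not a silent coercion
    raise AssertionError("should not be reached")
except TypeError:
    pass                   # the grammar-level ambiguity never materializes
```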

I read that Objective-C does not support mixin multiple inheritance, thus it cannot scale a compositional open source model. I don't have time to fully analyze Objective-C for all of its weaknesses, but it probably doesn't have a bottom type, higher kinds, etc. All are critical for the wide-area compositional scale. Thus I assume Objective-C is not worth my time to analyze further. I know of only 3 languages that are anywhere close to achieving the compositional scale: Haskell, Scala, and Standard ML. Those are arguably obtuse, and still lack at least one critical feature. I realize this could spark an intense debate. Is this blog the correct place?

@shelby
About the rise of the consultant, what you are referring to is the theory of the firm.

http://en.m.wikipedia.org/wiki/Theory_of_the_firm

@Winter so Transaction Cost Theory defines the natural boundary and size of the corporation. They mention Coase's Theorem. Thanks.

@uma:
I agree, if you meant not only FP but immutable (i.e. pure, referentially transparent) FP. It also must have higher-kinded, compile-time typing, which can be mostly inferred, with unified higher-level category-theory models hidden behind the scenes to eliminate compositional tsuris without boggling the mind of the average programmer.

I understand, because I initially struggled to learn Haskell, Scala, and Standard ML. If we make PFP easier, more fun, more readable, less verbose, and less buggy than imperative programming, perhaps we can get a waterfall transition. Note PFP is just declarative programming (including recursion), and declarative languages can be easy to use, e.g. HTML (although HTML is not Turing complete, i.e. no recursion). This is premature to share, as I have no working compiler, no simple tutorial, only the proposed grammar (SLK syntax, LL(k), k = 2) and some example code. I found many confusing things in Haskell and Scala to simplify or entirely eliminate, including the IO monad, that lazy nightmare, Scala's complex collection libraries, Scala's implicits, Scala's mixing of imperative and PFP, and Java & Scala type-parameter variance annotations, which are unnecessary in a pure language, etc.

@john j Herbert:
Apple's gross appstore revenue for 2011 is projected at $2-4 billion, and total appstore annual gross revenue is projected to rise to $27 billion within 2 years. While hardware prices and margins will decline, this confluence perhaps lends some support to my thought that the future waterfall-decline threat to Apple is an open-app Web 3.0 cloud. Perhaps Apple's strategy is to starve the Android hardware manufacturers of margins, as the margins shift to the appstore, total smartphone hardware volume growth eventually decelerates, and the debt-driven expansion starves future hardware demand. I note the battle with Amazon over the name "Appstore". I broadly sense that Apple may be trying to create a captive internet, but I haven't investigated this in detail.

@jmg:
My understanding is that Smalltalk is anti-compositional, i.e. anti-modular, because it doesn't have a sufficient type system[1], e.g. subtyping, higher kinds, and diamond multiple inheritance. You are welcome to correct my assumption, but an object-messaging fiction doesn't remove that necessity.

@uma:
Agreed that interleaving FP and imperative code adds complexity to the shared syntax for no gain if purity (a/k/a referential transparency) is the goal (and this apparently applies to Clojure too), because in functional reactive programming (i.e. interfacing IO with PFP), the impure functions will be at the top level and call the pure functions[2], a simple concept which doesn't require the mind-boggling complication of Haskell's IO monad fiction.
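
A minimal sketch of this top-level separation, in Python for illustration (the function names are hypothetical, not from Copute or any library): the pure core is referentially transparent, and all IO lives in one impure driver that merely feeds it.

```python
# Pure core: no side effects, trivially composable and testable.
def next_state(state, line):
    count, total = state
    return (count + 1, total + len(line))

def report(state):
    count, total = state
    return f"{count} lines, {total} chars"

# Impure shell: the only place IO happens; it just drives the pure core.
def main(lines):
    state = (0, 0)
    for line in lines:      # imagine these lines arriving from stdin
        state = next_state(state, line)
    print(report(state))

main(["hello", "world"])    # prints: 2 lines, 10 chars
```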

Clojure is not more pure than Scala, and is only "configurably safe". A Lisp with all those nested parentheses requires a familiarity adjustment (in addition to digesting PFP) for the legions coming from a "C/C++/Java/PHP-like" history. I doubt Clojure has the necessary type system for higher-order compositionality[1].

My point was that we need all of those advantages in one language for it to hopefully gain waterfall adoption. The "easier" and thus "more fun" seem to be lacking in the only pure FP language with the necessary type system[1], Haskell. And Haskell can't do diamond multiple inheritance, which is a very serious flaw pointed out by Robert Harper and is apparently why there are multiple versions of functions in the prelude. All the other non-dependently-typed languages have side effects which are not enforced by the compiler, or don't have the necessary type system. Dependently-typed languages, e.g. Agda, Epigram, and Coq, are said to be too complex for the average programmer, which esr noted in his blog about Haskell.
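
To make the term concrete, here is a sketch of diamond (mixin) inheritance in Python, which resolves the diamond via C3 linearization; all class names here are hypothetical, purely to illustrate the shape of the problem being discussed:

```python
class Monoid:                      # the shared root of the diamond
    def op(self, other):
        raise NotImplementedError

class Additive(Monoid):            # one path of the diamond
    def op(self, other):
        return self.__class__(self.v + other.v)

class Describable(Monoid):         # the other path of the diamond
    def describe(self):
        return f"value {self.v}"

class Num(Additive, Describable):  # Monoid is reached via two paths
    def __init__(self, v):
        self.v = v

n = Num(2).op(Num(3))
assert n.v == 5 and n.describe() == "value 5"
```

Python's method resolution order decides deterministically which path wins when both define the same method; the point in the text is that a language lacking such a mechanism forces duplicate definitions instead.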

I agree the IDE tools are lacking, but HTML demonstrated that a text editor is sufficient. I disagree that any of those other languages can become the next mainstream language, regardless of how good their tools become, because they don't solve the compositional challenge[1]; what then is the motivation for the vested interests to leave the imperative world? I think a language that solves the compositional challenge will "force" adoption, because its collective community power in theory grows like a Reed's Law snowball, i.e. it will eat everything, due to the extremely high cost of software and the amortization of that cost over a more granular compositional body of modules.

@phil:
The prior paragraph derives abstractly from thermodynamics, i.e. economics. State and variables exist in PFP. What PFP does is make the state changes orthogonal to each other[3].

@uma:
The time indeterminism in Haskell is due to lazy evaluation, which isn't desirable. See the "lazy nightmare" link in my prior post for the rationale. Orthogonal to the indeterminism issue, where finely-tuned imperative control of time is necessary (which btw is always a coinductive phenomenon[2] and thus anti-compositional), this goes in the top-level impure functions in Copute.

@Jeff Read:
IO in PFP requires the compositional way of thinking about the real world[2], i.e. a coinductive type. The practical example[2] is easy for the average programmer to grasp. It is just a generalization of the inversion-of-control principle. This stuff isn't difficult; it was just difficult to realize it isn't difficult. Once that "aha" comes clear in the mind, it is like "why wasn't I always doing it this way".

Sections of my site (scroll horizontally):
[1] Skeptical? -> Expression Problem, Higher-Level, and State-of-the-Art Languages.
[2] Skeptical? -> Purity -> Real World Programs.
[3] Skeptical? -> Purity -> Real World Programs -> Pure Functional Reactive -> Declarative vs. Imperative.



@Nigel
It may be true that there are cases where transactional costs of uncoordinated software development leave captive markets for the corporation. That isn't an indictment of the open source model of cooperation in the Inverse Commons, which amortizes costs and risks, but imo rather an orthogonal indictment of the technology we currently have for software development.

Hypothetically, if a huge software project could be optimally refactored such that it had the least possible repetition, and if I was correct to assert that mathematically this requires the components (e.g. functions and classes) to be maximally orthogonal, then what would happen to your assertion that only big software companies will ever be able to cooperate to create huge projects?

In the theory of the firm that Winter shared, the reason the corporation exists is because there is a transactional cost (or risk cost) for uncoordinated cooperation. So what is the nature of that transactional cost in software? Afaics, it is precisely what causes the Mythical Man Month, i.e. getting all devs on the same wavelength, because the code they write all has interdependencies. But if there is maximal orthogonality of code, then the communication reduces to the public interface between code. Also, higher-level models such as Applicative automatically lift any function of any number of parameters of unlifted types, T -> A -> B -> C ..., to higher-kinds of those types; i.e. you get for free all functions of the lifted types (without having to write infinite special-case boilerplate), e.g. List[T] -> List[A] -> List[B] -> List[C], and likewise for any other class type that inherits from Applicative, not just List. This is the sort of reuse and orthogonality that could radically reduce the transaction costs for uncoordinated development. With a huge preexisting library of orthogonal modules, a small team of the future could possibly whip up large compositions at an exponentially faster rate. We have a huge body of code out there today, but my understanding is it is often difficult to reuse, because it takes more time to learn what it does and to extricate and refactor the needed parts. I have not read esr's book on unix philosophy and culture, but I think I understand intuitively that it has a lot to do with orthogonal design, e.g. pipes in the shell with many small utility commands (programs). Although it might seem that code for different genres of programs is totally unrelated, I am finding in my research that maximal orthogonality produces more generalized code.
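
A sketch of this "free lifting" for the list instance, in Python for illustration (Haskell's Applicative generalizes the same idea to any conforming type, not just lists; `lift` and `add3` are hypothetical names of mine):

```python
from itertools import product

def lift(f, *lifted_args):
    """Lift an n-ary function on plain values to one on lists (the list
    Applicative): apply f to every combination of elements, with no
    per-arity boilerplate."""
    return [f(*args) for args in product(*lifted_args)]

def add3(a, b, c):
    return a + b + c

# One generic lift covers every arity and element type:
assert lift(add3, [1, 2], [10], [100]) == [111, 112]
assert lift(str.__add__, ["a", "b"], ["x"]) == ["ax", "bx"]
```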

I can rightly be accused of over-claiming without a working body of code (so I better shut up), and on the other extreme you wrote we can "never" progress. I hope I can change your mind someday.

From email:
>> http://goldwetrust.up-with.com/t151-book-ultimate-truth-chapter-3-capital-is-not-money#4540
>
> Not true. Under the gold standard the money actually buys more over time.

Yeah but that is not what I said. I said the nominal increase is not a proportional increase, i.e. your portion of the entire economy decreases.

However, if gold is the only thing that is money, then your proportion would only decline by the mining rate of gold, and this is why I say we can never (nor should we) make gold the only thing that is money, because then it would mean passive capital owns future innovation.

Read more at Passive Stored Capital is Always Fleeting (Depleting).

http://esr.ibiblio.org/?p=3689&cpage=5#comment-321944

The events leading out of feudalism appear to be attempts to free humanity from the slavery of unmotivated passive capital, whose power was sustained by the marriage of state and religion (which outlawed the "sin" of usury), by using debt to bypass and compete against hoarded private capital. I wrote previously that gold can't be the only money, otherwise passive capital enslaves all future innovation, because all profits are captured as a deflation relative to gold. It appears mankind has been oscillating between debasement blowback (the Roman empire and now) and no debasement, which motivates capital hoarding (feudalism).

The fundamental problem is that in a material world, the transactional cost in the Theory of the Firm (thanks Winter) enables corporate capital to accumulate faster than capital accumulates for those who produce the knowledge. However, I think we are entering a radical paradigm shift, where knowledge (the mind) becomes much more valuable than material production, because industry can be automated (see the $1200 3D printer) but knowledge isn't static and can't be automated. I refuted Kurzweil's Singularity and debated Chomsky on Hume's mitigated skepticism (the upshot is I argue that abstract math and infinity exist and are equivalent to the never-ending universal trend to maximum entropy).

http://esr.ibiblio.org/?p=3695&cpage=2#comment-321961

@Nigel:
hardware prices remain fairly static and manufacturers just provide greater capability at the same price points

The greatest capability increase of the past decade has been the knowledge deposited on the internet, the consumption of which has supplanted compiling as my main knowledge activity. For that purpose, my less-than-$100 computer (in 1990s dollars) works as well as the $1000+ one I needed a decade ago to compile faster (when compiling was my main activity). I am not factoring in the price of the monitor, as these have razor-thin profit margins, and remember my point is that profit margins drive the relative "nominal" (global aggregate) profit when comparing hardware vs. software.

And the price is not my point, but rather that per-unit profit margins in hardware are declining, because the economies-of-scale are increasing with the physical production automated (or cheap labor which is in oversupply as we enter the knowledge age). Thus the aggregate profits are becoming a relatively lower percentage of total profits in the world, when the comparison is between industry in general versus software and knowledge production. In short, all profits are derived from the knowledge portion of the business, not the physical production.

Apple apps still perform better than open apps regardless of underlying technology

That debatable quality advantage isn't sustainable, because the Inverse Commons has proven numerous times to be the winning economic model.

@Winter:
So, in the end, it is work that will pay. In the end, work by the hour.

Knowledge can't be automated. Labor by-the-hour is not correlated with knowledge production, and is often anticorrelated, e.g. the Mythical Man Month. The belief that labor and knowledge are equivalent is the fallacy of communism.


Last edited by Shelby on Thu Sep 08, 2011 2:28 am; edited 5 times in total

Shelby
Admin

Posts: 3107
Join date: 2008-10-21

View user profile http://GoldWeTrust.com

Back to top Go down

Eric Schmidt says Google+ is to track your identity

Post  Shelby on Tue Aug 30, 2011 8:35 pm

http://esr.ibiblio.org/?p=3500&cpage=1#comment-320303

Note the original "issues analysis" link can no longer be read without logging in to G+. Is the future of the internet that we can't access information without having our identity tracked?

Eric Schmidt says that G+ is really an "identity service, so fundamentally it depends on people using their real names if they’re going to build future products that leverage that information" (presumably for an advertising database). Is that the assured "do no evil": creating a centralized global database of identities and tracking all the social groupings (i.e. interests, political sub-groupings, business affiliations, interest in certain ideas due to link crowd-sourcing, etc)?

I propose we in open source can create our own open decentralized social network, without depending on a large corporation.


Chile is giving away $40,000 to startups that relocate to Chile

Post  Shelby on Wed Aug 31, 2011 1:18 pm

I inquired:

http://www.startupchile.org/2011-round-2-now-closed/#comment-5193

Shelby wrote:
Would the outline of my startup at my website be adequate? (scroll horizontally)

You can also read my numerous comments at this blog of Eric Raymond, the creator of the open source movement:

http://esr.ibiblio.org/?p=3634#comments

Does this offer a path to permanent residency and citizenship?

I don’t need the $40,000, and I don’t have a lot of time to waste. But I am very interested in South America and helping to lead your developers.

My project is most definitely global; I expect it to affect everything.


Anonymity, Google, and future of the world

Post  Shelby on Thu Sep 01, 2011 12:22 am

Regarding this post:

Eric Schmidt says Google+ is to track your identity

Shelby replied in email:
Nothing is free ever. Liberty yes, but zero cost doesn't exist.

Technically speaking, you are wrong when you say a decentralized, anonymizing network can't limit the degree of invasion of privacy. Although it is true that a determined hacker (such as the govt) can always break through anonymity shields, in real terms it can raise the reasonable cost for them to do it, such that 99.9% of the people will be effectively anonymous. There is a huge difference between that and what Google is doing. HUGE MY FRIEND. VERY HUGE.

One thing you didn't realize is that the independent actions in free markets look random and disorganized. The structure is hard to identify, so it is difficult for those who own the connecting wires to use the information, because it won't mean anything to them. Yet still the free market anneals a globally optimal result:

http://esr.ibiblio.org/?p=3614#comment-320296

Please don't say it will never exist, unless you can justify that statement technically. I have been studying and thinking about P2P and decentralization since at least 2006. Heck I even explained to BitTorrent that their economic algorithm was flawed:

http://forum.bittorrent.org/viewtopic.php?id=28

I am implementing that decentralization now with Copute.

Copute will eliminate the "power of top-down control", which you pigeon-hole as being only psychiatry.

Apparently you don't understand that there are degrees of anonymity on the internet. At the extreme, one logs in from an internet cafe, refuses cookies while browsing, and never logs in with any identity to any site. The next level is to use a VPN; then at least authorities have to raid your VPN provider to get the IP correlation to your identity.

The degree of fine-tuned information is much greater with Google because it is everywhere (I touch their domain on nearly every page I visit). I will give you an example.

Google writes a cookie for *.google.com, then you log in to any Google service, including Blogger, Gmail, etc, then you log out. Now Google can correlate your exact identity via that cookie whenever you surf to any website that has advertising coming from *.google.com.

Additionally, Google+, using social crowd-sourcing, will be able to narrow down your personality profile and other statistics about you to such a degree that they will perhaps know you better than you know yourself:

http://esr.ibiblio.org/?p=3614#comment-320285
http://esr.ibiblio.org/?p=3614#comment-318814

Should someone with connections with the "authorities" want to get a list of all people who oppose something or are vested in some competing business or movement, then in theory they can under the dictatorial powers created by the various executive orders and laws passed since 9/11.

Also I am planning to do something about this. That is the reason I raise this issue. There is an alternative coming.

Btw, the progress on Copute is going amazingly well and I expect to start launching and changing the world before the end of 2011. Here are the latest code samples, and you can read my numerous comments at the following blog of Eric Raymond, one of the main proponents of open source (he wrote the famous The Cathedral and the Bazaar):

http://copute.com/dev/docs/Copute/ref/std/
http://esr.ibiblio.org/?p=3634#comment-319373 (Ctrl+F for "Shelby" to find all)

Note Google says they anonymize data after 18 months, but since the data is always being renewed, that effectively means forever.

> A non-issue.
> So what if Google+ forces real names.
>
> Anyone who has been online for 3 months (probably less) has his identity
> known, at least in the US and most of the EU. Many people believe that they
> are "anonymous" when online, or they believe that what they do is not
> associated with their real identities. Hogwash. The US and any other
> allied govts have long ago employed systems that monitor Internet
> traffic, such as Carnivore, Echelon and other secret systems.
>
> Yes, a person can be somewhat anonymous up to the point where a govt
> decides that they want to identify "suspects". How do you think they
> catch hackers and catch members of Anon and other hacker groups? It's
> not done by legwork. It's done by rapidly culling databases of captured
> traffic. It takes longer to id these guys because they route all
> their traffic via onion networks, but even those networks are vulnerable
> to the sophisticated govt monitoring systems.
>
> The same data gathered by Google, Facebook and other Internet sites is
> also gathered in physical locations like malls, stores, gas stations,
> grocery stores, etc. Ever since plastic has been used as a payment
> method, these corps have been collecting and sharing their databases of
> consumer activity.
>
> Go to a movie, buy the ticket using anything other than cash and your
> interest gets recorded. Buy gas, throw in a package of M&Ms and your
> candy choice gets recorded. Do that several times over the course of a
> few months and you will start to receive snail mail ads from the candy
> company.
>
> Almost every purchase made today has a clause in its purchase agreement
> that "some info will be shared with partner organizations."
>
> And when you "follow the money" you discover that there are just several
> very large corps at the top which own and control all the smaller
> corps. Thus, the term "partner companies" becomes "just about every
> other company out there".
>
> Those that scream and natter about "my privacy is being violated"
> almost always have something to hide, some criminal activity that they
> are involved in or were involved in in the past, and don't want to be
> found out about. They either murdered someone or they cheated on state
> tax returns, or they yelled at the wife for flirting with the neighbor
> in 1978. Or any transgression in between the extremes.
>
> I am not saying we should not stand up & fight for our human rights. I
> am saying that it is unnecessary to focus on Internet data collection
> and more viable to focus upon methods of changing the world that are
> effective. A real, free, secure, decentralized networking system will
> not change the world. No such system has ever existed nor will it ever
> exist unless one also owns and controls the Internet itself. I am
> talking about the actual Internet by definition, which means the
> hardware, infrastructure, satellites, cables, towers, etc. Unless one
> controls those, any govt, Internet provider or large powerful
> corporation can use the infrastructure as they see fit (monitor the
> traffic and collect the packets).
>
> "In the interest of national security" is something the US uses to get
> its hands on anything they want, including Google's algorithms. You
> think Google wins against the US govt security agencies. Ha! What we
> see re technology legal battles in the media is only what they allow us
> to see.
>
> There's really only one surefire method of changing the world into one
> where human rights are a reality: Eliminate the groups and individuals
> that are responsible for causing the mindset of humanity to believe that
> man is but an animal to be controlled. This philosophy stems from
> Wundt, Pavlov and Marx, and has permeated every major facet of human
> activity. Socialist and Communist ideals bring about the downfall of a
> civilization.

