Computers:



Subtype semantic contract is typing

Post  Shelby on Mon Jan 17, 2011 6:01 am

http://lambda-the-ultimate.org/node/1551#comment-64183

This is a reply to my post yesterday (posted 7 am EST; it is now 12 am midnight EST, 12 pm noon for me in Asia), but it will not appear properly indented under my prior post, because I am unable to click "reply" on my prior post: it hasn't yet appeared on this page (not yet approved by the moderator), and is not even visible to me privately while logged into LtU.

Shelby Moore wrote:
...the generative essence, which is that granularity of types is what determines the tension.

Afaics, the importance of typing is to enforce semantic bounds at compile-time (i.e. locality of concerns), to avoid proliferating run-time "exceptions" (misbehavior of any degree)...

...while getting rid of virtual inheritance on non-abstract classes avoids the Liskov Substitution Principle problem...

Let me show the derivation of those above assertions.

LSP states that a property is inherited by all subsets (subtypes) when that property is provable for the superset (supertype).

LSP thus implies it is generally undecidable whether subsets inherit semantics. This is due to the Linsky Referencing principle, which says that it is undecidable what something is when it is described or perceived; i.e. the static-typing compiler can enforce the relationships between the declared types, but not the algorithms applied to the variants of the interface. Thus the type of a method function (aka interface) is proven to inherit at compile-time by the static-typing compiler, but the semantics derived from that inherited interface type are undecidable/unprovable.

Shelby Principles on LSP


  1. In order to strengthen the semantic design contract, it has been proposed to apply preconditions and postconditions on the variants of the interface. But conceptually such conditions are really just types, and can be so in practice. Thus the granularity of typing is what determines the boundary of semantic undecidability, and thus, given referential transparency, also the boundary of tension for reusability/composability. Without referential transparency, granularity increases the complexity of the state machine, and this causes the semantic undecidability to leak out (alias) into the reuse (sampling of inheritance) state machine (analogous to STM, thread synchronization, or other referentially opaque paradigms leaking incoherence in concurrency). Coase's theorem (i.e. there is no external reference point, so any such barrier will fail) predicts the referential-dependency failure, which is really just the Shannon-Nyquist sampling theorem (i.e. aliasing occurs unless one samples for infinite time and infinite granularity in space-time; the Nyquist limit can't be known, due to Coase's theorem) and the second law of thermodynamics (i.e. the entire universe, a closed system and thus everything, trends to maximum disorder, where disorder means maximum possibilities or granularity).

  2. Thus I disagree with the current (afaik) state-of-the-art in the literature, which claims LSP allows interface arguments to be contravariant and return values covariant under inheritance. Afaics, they must be invariant in order to inherit the same semantics on the interface, unless the variance is between 100% abstract (no mixed semantic implementation) types. What someone was probably thinking is that those variance rules on interface inheritance enable the subtype to fulfill LSP, where the property that holds true for the superset (supertype) is that each member (subtype) obeys the order of the declared inheritance hierarchy. But that is a very weak property (the semantics of the order of the hierarchy do nothing to enforce the semantic behavior of the interface; e.g. inheritance order does not prevent a subtype CIntersection from silently ignoring duplicate adds to a supertype CUnion, whereas a boolean return value for success does). Note: differentiate between interface inheritance and invocation. The invocation of an interface allows the inverse of such variance, but that is an unrelated issue (if the interface inheritance is invariant and thus LSP-correct).

  3. For each supertype method that does not have a semantic implementation, it is by definition impossible for it to semantically deviate from its implemented subset (surjectively; the subset members can still deviate from each other). Thus it can be invoked as a virtual method with semantic type-safety (without violating LSP) when called on a compile-time (aka statically typed) reference to that supertype (even though at run-time the reference points to a subtype with an implementation of that method, because an abstract type can not be instantiated). By definition, any type that has an incomplete implementation of its interface(s) is an abstract type. Note that for a method which has an implementation, it is not semantically type-safe (violates LSP per my prior paragraphs) if called virtually on a compile-time reference to the type (even abstract) that contains that implementation. Thus perhaps one can make a good rationale for not mixing implementation into an abstract class, so that if abstract class names are transparent to the programmer, it is always clear when semantics are undefined at compile-time and virtual at run-time, and thus to force separation (granularity). One of my critical general design rules (for everything, including social science) is that the implicit should never be opaque, meaning implicit constructs should never create hidden ambiguity.
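To make the CIntersection/CUnion point in (2) concrete, here is a TypeScript sketch (TypeScript since Copute targets JavaScript-family syntax). The class bodies are my own hypothetical illustration; only the names CUnion and CIntersection come from the text. Inheritance order alone cannot stop the subtype from silently ignoring duplicate adds, but a boolean success return value makes the deviation observable to callers typed against the supertype.

```typescript
// Hypothetical illustration: a `void` add() would let the subtype silently
// drop duplicates; a boolean return surfaces the semantic deviation.
class CUnion {
  protected items: number[] = [];
  // Returns true if the element was stored (duplicates allowed).
  add(x: number): boolean {
    this.items.push(x);
    return true;
  }
  size(): number { return this.items.length; }
}

class CIntersection extends CUnion {
  // Semantic deviation: duplicates are ignored. The boolean return value
  // makes the deviation observable through a CUnion-typed reference.
  add(x: number): boolean {
    if (this.items.includes(x)) return false;
    this.items.push(x);
    return true;
  }
}

const u: CUnion = new CIntersection();
const first = u.add(1);   // true: stored
const second = u.add(1);  // false: the silently-ignored duplicate is now visible
```

With a `void` return type, `second` would carry no information and the caller would have no compile-time-visible signal of the subtype's changed semantics.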

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

View user profile http://GoldWeTrust.com


Concurrency: How we will program for computers with 1000+ processors

Post  Shelby on Tue Jan 18, 2011 4:19 am

http://lambda-the-ultimate.org/node/4182

Shelby wrote:
My data structure for Steele's word-splitting example is as follows, where enum is an algebraic type (which is just syntactic sugar for class inheritance), e.g.

Code:
enum Segment
{
  char( s : String )
  undelimited( left : Segment, right : Segment, words : Array<String> )
  delimitedLeft( left : Segment, right : Segment, words : Array<String> )
  delimitedRight( left : Segment, right : Segment, words : Array<String> )
  delimitedBoth( left : Segment, right : Segment, words : Array<String> )
  function combinator( left : Segment, right : Segment ) : Segment
  {
      // Insert logic here to return a new Segment that combines left and right
      // words will contain incomplete words on one or both ends of the array unless this is a Segment.delimitedBoth
  }
}

The input is Array&lt;Segment&gt; (probably Array&lt;Segment.char&gt;), and the covariant substitution is legal because Segment.combinator is referentially transparent. Then map-reduce this over Segment.combinator. If Segment.combinator is associative, call an associative version of map-reduce; ditto if it is commutative.
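The combinator can be sketched as a runnable associative operation. The following TypeScript sketch is my own illustration (it collapses the five enum variants into a two-case union; all names are hypothetical), but it shows why an associative Segment.combinator permits any map-reduce grouping: `chunk` is a delimiter-free run, and `seg` carries possibly-incomplete word fragments on both ends.

```typescript
// Hypothetical two-case encoding of the Segment idea above.
type Segment =
  | { kind: "chunk"; s: string }
  | { kind: "seg"; left: string; words: string[]; right: string };

const maybeWord = (s: string): string[] => (s === "" ? [] : [s]);

// Lift a single character into the structure (space is the delimiter).
function ofChar(c: string): Segment {
  return c === " "
    ? { kind: "seg", left: "", words: [], right: "" }
    : { kind: "chunk", s: c };
}

// The associative combinator: this is what makes parallel map-reduce legal.
function combine(a: Segment, b: Segment): Segment {
  if (a.kind === "chunk")
    return b.kind === "chunk"
      ? { kind: "chunk", s: a.s + b.s }
      : { kind: "seg", left: a.s + b.left, words: b.words, right: b.right };
  if (b.kind === "chunk")
    return { kind: "seg", left: a.left, words: a.words, right: a.right + b.s };
  // Two fragments meet in the middle and may complete a word.
  return {
    kind: "seg",
    left: a.left,
    words: [...a.words, ...maybeWord(a.right + b.left), ...b.words],
    right: b.right,
  };
}

// Because `combine` is associative, any reduction tree gives the same answer.
// Seeding with a delimiter segment is harmless: a leading delimiter adds no words.
function split(text: string): string[] {
  const total = [...text].map(ofChar).reduce(combine, ofChar(" "));
  return total.kind === "chunk"
    ? maybeWord(total.s)
    : [...maybeWord(total.left), ...total.words, ...maybeWord(total.right)];
}
```

Here the sequential `reduce` stands in for the parallel map-reduce; associativity is what would let a runtime split the array across processors and combine the partial results in any order.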

I am also thinking that, for example, images should be defined by a recursive hierarchical algebraic type that has resonant locality in 2D space (e.g. matching JPEG's 8x8 DCT blocks), instead of non-resonant locality in the 1D space of an Array&lt;Color&gt;, e.g.

Code:
enum Image
{
  pixel( c : Color )
  block( topLeft : Image, topRight : Image, btmLeft : Image, btmRight : Image )
  function combinator( topLeft : Image, topRight : Image, btmLeft : Image, btmRight : Image ) : Image
  {
      return new Image.block( topLeft, topRight, btmLeft, btmRight )
  }
}

Note the Image.combinator is associative, but in a 2D sense; thus we need a 2D version of map-reduce. Map-reduce is the constructor for an instance of a recursive hierarchical algebraic type.

Note that Image could be parametrized on the type of Image.pixel instead of being married to Color.
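A minimal TypeScript sketch of that parametrization (my own illustration; only `pixel`, `block`, and `combinator` echo the enum above, and the fold is an added example):

```typescript
// The pixel payload is a type parameter P instead of being married to Color.
type Image<P> =
  | { kind: "pixel"; c: P }
  | { kind: "block"; topLeft: Image<P>; topRight: Image<P>;
      btmLeft: Image<P>; btmRight: Image<P> };

const pixel = <P>(c: P): Image<P> => ({ kind: "pixel", c });

// The 2D combinator: blocks are constructed bottom-up, quadrant by quadrant,
// which is what a 2D map-reduce would parallelize.
const combinator = <P>(topLeft: Image<P>, topRight: Image<P>,
                       btmLeft: Image<P>, btmRight: Image<P>): Image<P> =>
  ({ kind: "block", topLeft, topRight, btmLeft, btmRight });

// A fold over the hierarchy, e.g. to count leaf pixels.
function countPixels<P>(img: Image<P>): number {
  return img.kind === "pixel"
    ? 1
    : countPixels(img.topLeft) + countPixels(img.topRight) +
      countPixels(img.btmLeft) + countPixels(img.btmRight);
}

// A 2x2 block of grayscale pixels: Image parametrized on number, not Color.
const block = combinator(pixel(0), pixel(64), pixel(128), pixel(255));
```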

http://lambda-the-ultimate.org/node/4178#comment-64108

Shelby wrote:
Compiled vs. library?

Afaics, the key premise this research hinges on, is that some important higher-level domain specific portions of optimizations must be done at compile time.

However, if it turns out that the optimizations are really just lower-level general optimizations, such as optimizing matrix math, combined with remapping data structures from accumulator and/or random-access forms to recursive, hierarchical ones with associative functional map-reduce construction (see my post at the bottom of that link; no direct link yet, as the post is awaiting moderator approval), then it may turn out that domain-specific optimizations are just run-time libraries that enforce certain data structures.

My intuition leans toward Tim's outcome being most likely, because of economy of scale and tackling the lowest-hanging fruit first. Nature prefers the multifurcating tree of possibilities because it is the most economic (fluid dynamics through pipes). Examples include the internet's physical network and the human brain.

Can anyone provide examples of domain-specific optimizations that must be done at compile-time and couldn't be restructured as libraries in the paradigm of concurrent data structures that I linked to above?

P.S. Afaik parser combinators (non-predictive recursive descent) don't prove lack of ambiguity in a grammar, because the global search of the First_k and Follow_k sets has not been enumerated.


Last edited by Shelby on Sat Feb 19, 2011 1:21 pm; edited 1 time in total


Ehud Lamm, have you censored my 2 latest posts at LtU

Post  Shelby on Wed Jan 19, 2011 1:12 am

Mr Ehud Lamm (editor of "LtU" site),

It has been 24 hours since I submitted two insightful posts on the future of theoretical data structures for parallel computing at your LtU site. I have made a copy of my posts at two blogs:

http://goldwetrust.up-with.com/t112p90-computers#4061

Those posts have not appeared at your site. I request an explanation of why the posts have apparently been censored.

I don't really know how to express this, so here it goes, as best as I can within the few minutes I have to allocate to this...

Perhaps you don't understand that those two posts are about the theory of the future of data structures. I was not arguing about design issues, but rather pointing out that algebraic types can be used to create data structures that are resonant with concurrency. I was pointing out that Steele's explanation of the data structure cluttered that understanding. Steele's introduction of a middle chunk that could contain both delimiters and words attacks the problem from the top down, which is the wrong way to think about concurrent data structures. Concurrency is a bottom-up map-reduce construction. That is a very fundamental point.

It is like this. I was trying to participate in your site, to see if there was an insight I could gain from others about my ideas. I felt others might gain from my insights and I might gain from their responses. I felt I was giving up some of my key commercial insights, but I felt it was worth it in the spirit of better outcomes for mankind.

However, I will just proceed without the benefit of sharing. It is going to be quite ironic if Copute becomes extremely popular and is acclaimed for solving some key issues in theoretical programming, and I then point out that Ehud and LtU were censoring my attempts to share my insights with other researchers.

God gave you a talent, which is at least your key site and positioning in the field. As you know from the Parable of the Talents, if you misuse the talent, it will be taken from you and given to someone who is able to use it better.

LtU is a very useful site, and I wouldn't propose to say that all of your talent is being wasted. However, I also see that LtU is in some respects a lot of noise with very few key generative essences realized. This is where my IQ really stands out.

Again, if I have simply misunderstood and I am wrong in some aspect, I would appreciate hearing it. I don't want to remain ignorantly overconfident. Shoot me down, if you can. Please.

Thanks,
Shelby


Last edited by Shelby on Thu Jan 20, 2011 3:15 am; edited 1 time in total


Ehud graciously replied, so I replied again

Post  Shelby on Wed Jan 19, 2011 6:15 pm

> Dear Shelby Moore,
>
> I should make it clear from the start that posting on LtU is not a
> right but a privilege, and that posting is entirely at my discretion.
>
> You have been posting very frequently recently, and posting very long
> messages. These messages do not seem to be of interest to the
> community, and they have not led to any replies as of yet. Your two
> posts that are being held for moderation are way too long based on the
> best practices of the site as I judge them. If and when I will allow
> them to appear will be based on my judgement of their interest to the
> community.
>
> Given the significance you perceive in them, I urge you to try to
> publish your results in peer-reviewed literature. LtU is certainly not
> aimed at publishing new results (feel free to consult the policy
> document). If you do not wish to go this route, there are many ways to
> self-publish on the internet.
>
> Best regards and good luck,
> Ehud

Dear Ehud Lamm,

Thank you for the reply. I should reciprocate with my honest reply.

For the public record, I will enumerate your accusations, as we are called to bear witness:


1. I have posted 4 times between Jan 6 and Jan 17:

http://lambda-the-ultimate.org/tracker/7621

If you had not censored my 2 posts, that would have been 6 posts between Jan 6 and 18, which is a whopping 0.5 posts per day. Wow that is really too many?

Yet we see you do not censor others who have posted much more frequently, such as yourself, with 12 posts between Jan 6 and 18:

http://lambda-the-ultimate.org/user/1/track

And many other examples, such as this user that posted 17 times between Jan 6 and 18:

http://lambda-the-ultimate.org/user/6002/track

Even this one page alone has numerous posts per user, by numerous users per day, and the real-time dialogue also evidences that you don't have them on moderation as you do me (which adds circumstantial evidence that it is something personal against me):

http://lambda-the-ultimate.org/node/4176



2. My first post received a reply, and I rebutted it with my 2nd post, and there were no more replies because my rebuttal was irrefutable fact:

http://lambda-the-ultimate.org/node/735#comment-63943

So how can you assume that my posts are not generating interest in the community, when 2 of my posts obviously did (surely the person I rebutted would have rebutted me if I were not correct in my 2nd post)? You have a whopping sample size of 4 posts, with 2 posts demonstrating community interest. I assume you know what standard deviation means, so I can only assume that you are being intentionally facetious in the extreme.

Notwithstanding the statistical void of a sample size of 4, of which my demonstrated community interest was 25-50%, how did you measure that my 3rd and 4th posts were not so overwhelmingly accepted as fact that the community appreciated them but did not see a need to comment further? Did you have non-public discussions?



3. You published 2 of my short-to-medium size posts and 2 of my longish posts. I see that others have made long posts at times. I also see that the 2 posts you censored were shorter than the 2 longish ones of mine that you did not censor. Also, 1 of the posts you censored was very short, the shortest of all my 6 attempted posts from Jan 6 to Jan 18:

http://goldwetrust.up-with.com/t112p90-computers#4061

If the length of my posts is/was the issue, it would be very simple for me to edit my posts (if they were posted), provide a link to the same information off-site, and provide only a terse summary at LtU. I have no extreme need for my posts to appear at LtU, and I won't force the issue where I am not wanted.



4. About the privilege-versus-right issue: I have read the public FAQ and policy documents, and I see I have not violated the policies, nor afaics do any of your accusations hold any objective truth. So I assume you are saying that the privilege is entirely arbitrary, based on your personal judgment of a person's qualities other than the objective quality of their contribution. In other words, it seems you are implying that LtU is not a professional site (contrary to the specific claim in your FAQ and policy documents that it is for professionals), open to objective peer review, but rather a private club for subjective, closed opinion.


http://lambda-the-ultimate.org/faq

"Your contributions are welcome, in the form of questions, announcements etc. However, abusive or off-topic posts will be deleted immediately."

"Keep in mind that LtU is a community site and regular and respected members are expected to let posters know when their posts violate the spirit of LtU. If you receive responses of this sort, it is firmly suggested that you review your contribution, and accept that your style of discussion or choice of topic may be inappropriate for this site. Rest assured that this community moderation will not be used casually. In the unlikely chance that you feel this happens, and this somehow goes unnoticed by the community at large, feel free to let me know how you feel and any other concerns you might have.

I'd be happy to have many folks contributing to the site, so if you read LtU regularly, participate in the discussion group and are interested in becoming a contributing editor and posting items to the homepage - just let me know."


I did not see any community members giving my current username any negative responses about my style of contribution. I assume you believe in the Jubilee, forgiveness, or the ability of people to learn and adjust.


http://lambda-the-ultimate.org/policies

"LtU is foremost a place to learn and exchange ideas. The LtU Forum is not a debating forum for advocacy, posturing, attacks, vendettas, or advertising. It is a forum for informed professional discussion related to existing work.

Your contributions are welcome, subject to the policies described below. Abusive or off-topic posts will be deleted immediately. Posting here is a privilege, not a right.

Note that these policies were developed mainly to help new members understand the site, and to help maintain a high quality of discussion."



5. Please don't talk to me facetiously, as if I am a child, which I am obviously not. Considering I wrote a web-page publishing tool (Cool Page) 13 years ago, which over a million people used to publish their own web sites, I certainly know how to publish my ideas, and I had already informed you that I had done so to two blogs.

Honestly I really don't care if you publish my posts, it is more of point about principle. I told you before that I was doing it out of the unselfishness of my heart to share with others for their benefit, with the possibility for me to get the benefit of feedback.

I urge you (not facetiously) to revisit your Torah and the values you were, I assume, taught but may have forgotten (but I can't know or judge, because I am not inside your mind and heart):

http://www.torah.org/learning/jewish-values/archives.html

For example, we must judge fairly. We must not steal (or waste) the time of others. Again referring to the link above, it talks about "Returning lost objects". My time and posts are lost. "Honoring others", "Distancing yourself from falsehood", etc.

I hereby apologize for my sins and ask for forgiveness. I probably don't fully know to what degree my intentions were imperfect, but I do feel I had a genuine desire in my heart to share and entertain synergy for good.

I can not judge you, sir, so let us end here. All the best,

Shelby


Copute

Post  Shelby on Fri Jan 21, 2011 3:50 am

C-like, Cooperative, Composable, Concurrent, Cool, Cha-Ching


Fundamental outline of Copute

Post  Shelby on Fri Jan 21, 2011 10:53 am

http://copute.com/dev/docs/Copute/ref/function.html

Without the powerful static typing and pure function options, Copute is essentially the same grammar as JavaScript.

The Copute language is composed of five fundamentals: type, instance reference, expression, function, and imperative scope.

  • Type is declared by a class statement, an enum statement, inseparably in a function (instance construction) expression, or by the identifier associated with the aforementioned declarations (when not anonymous).
  • Instance construction is declared by a class or enum constructor call expression, a function expression, or a literal class expression-- all of which return an instance reference.
  • Expression constructs an instance or operates on instance reference(s), and returns an instance reference or void.
  • Functional programming is a function call expression that may optionally nest (a hierarchy of) function call expressions. A referentially transparent (aka pure) function is re-entrant, stateless, partial-evaluation agnostic, and thus composable (aka reusable).
  • Imperative (aka stateful, or state-machine) programming is an ordered sequence of expression(s). Each imperative sequence (in a nested hierarchy of them) is an identifier namespace (aka scope)-- a means to granularize the referential opacity. A statement is a grammatical unit which forms an expression that returns a type of void.

A Copute source code file is implicitly wrapped in an anonymous function that is called at initialization.
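That implicit wrapper corresponds to the familiar JavaScript IIFE (immediately invoked function expression). A minimal TypeScript sketch, with hypothetical names: file-level identifiers become locals of the wrapper scope rather than globals, and the body runs once at initialization.

```typescript
// Sketch of the implicit file wrapper as an explicit IIFE.
const moduleResult = (function () {
  const greeting = "hello";           // local to the file's wrapper scope
  function shout(s: string): string { // not visible outside the wrapper
    return s.toUpperCase();
  }
  return shout(greeting);             // runs once, at initialization
})();
// moduleResult === "HELLO"; `greeting` and `shout` do not leak out.
```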


Steven Obua just described my Copute project, very substantially

Post  Shelby on Sat Jan 22, 2011 5:07 pm

http://lambda-the-ultimate.org/node/4182#comment-64249

I will have a look at Steven Obua's current work.

I will be emailing the following to Steven Obua.

Okay: in 20 minutes I reviewed his new computer language Babel-17, his recent research paper, and the expert criticisms he received:

http://arxiv.org/PS_cache/arxiv/pdf/1007/1007.3023v1.pdf
http://phlegmaticprogrammer.wordpress.com/2010/11/21/response-to-reviews/
http://phlegmaticprogrammer.wordpress.com/2010/11/21/reviews-for-purely-functional-structured-programming/

He is aiming to achieve referential transparency (purity) in a structured-language style that looks a lot like the imperative (stateful) code familiar to many programmers, but is actually selectively pure externally, by selectively not allowing references to see the external scope. He accomplishes this by "shadowing", which means hiding an external-scope reference by declaring another instance with the same identifier in the local scope. This means some references could still see the external scope if they were not also hidden; thus his design is very granular in that respect (but granular in an undesirable way, I think, if not coupled with some additional semantics, as I will explain below). JavaScript has this selective hiding capability now, and so does the design of Copute. As the expert reviewers point out, this is not a new concept. The hard part is how to get programmers to use it for purity.
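A minimal sketch of that shadowing mechanism, in TypeScript (the names are hypothetical): redeclaring an identifier locally hides the outer binding, so mutation inside the function cannot leak out, whereas omitting the shadow leaves the external state visible.

```typescript
let counter = 0;

function shadowed(): number {
  let counter = 0;   // shadows the outer `counter`
  counter += 1;      // mutates only the local copy
  return counter;    // always 1; externally this function behaves purely
}

function leaky(): number {
  counter += 1;      // no shadow: external state is visible and mutated
  return counter;
}

const a = shadowed(); // outer counter unchanged
const b = leaky();    // outer counter is now 1
```

This is the granularity point above: purity here depends on every external reference being individually shadowed, which is why an explicit pure-function declaration is proposed instead.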

His marketing objective is related to mine, in that he wants to enable the integration of structured imperative programming with pure functional programming, in a more familiar and "less mathematical" (more intuitive or natural) semantics than the Haskell monad (for the average non-mathematical programmer).

However, I think he falls far short of what I am doing with the design of Copute. Afaics, the key problem with his design is that there is no structured and explicit way to define the boundaries between functions that are referentially transparent and those that are not. It is the composition of pure functions that enables reusability to scale. This is the same criticism I make against the Haskell monad, where implicit typing and the ad-hoc-polymorphism typing system allow any type to cross-pollute another, meaning there are no concrete, explicit boundaries on purity (or on semantics, and thus type-safety in general). I wrote about that at the following link.

http://copute.com/dev/docs/Copute/ref/class.html#Inheritance

It is really the typing system that enables scalable composability (his Babel-17 is not typed, so it is hopeless, as the expert reviewers point out), and this is why Copute puts so much effort into getting the purity rules for type variance (inheritance) correct; typing is how we will parallelize our future (note that the post at the following link was censored from LtU).

http://goldwetrust.up-with.com/t112p90-computers#4061

So in summary, he is correct that Scala missed the boat on purity (Scala's creator Odersky has stated that this is because of the challenge of integrating with the rest of the Java world), but we need the correct typing system in order to achieve parallelism. I think Steven Obua is starting to realize that, but he is still delegating to inference in the compiler instead of typing, which is incorrect because the programmer has to think in terms of naturally concurrent data structures:

http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/
http://lambda-the-ultimate.org/node/4182#comment-64170
http://lambda-the-ultimate.org/node/4182#comment-64227


re: Steven Obua just described my Copute project, very substantially

Post  Shelby on Sun Jan 23, 2011 9:50 am

http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-92

Hi Steven, thanks for the clarification. I agree, a public discussion is preferred. I didn't want to create Copute (and I don't even have a working compiler yet), but I need a better mainstream language, and I got tired of begging others to do it and waiting. I don't even think I am the most qualified to do it (I'm historically an applications programmer, learning to become a language researcher + designer since 2009), so here I am. So it is worthwhile if there is anything we can learn from each other, and share publicly. In short, I appreciate the discussion, because I don't want to make a design error or waste my effort "barking up the wrong tree".

If I understand correctly, per your clarification Babel-17 makes all functions pure, but I asserted in my prior post my understanding that within pure functions, the structured code is granularly opaque via per-data-instance shadowing. That does not make the containing function impure, so that is fine (Copute and JavaScript can do that too, but JavaScript can't assert that the function is pure, and its closures are even opaque). One proposed difference for Copute is that a function is only a referentially transparent boundary if it is declared and enforced as pure.

If I understand correctly that we share the goal of facilitating the integration/interoperation of (and transition between) imperative and pure functional programming, then why would we not need both impure and pure functions?

My understanding is that programs are more than just functions; they are compositions of rich semantic paradigms which can be declared with typing. And life is not entirely referentially transparent. For example, the Observer pattern requires a callback (external state), thus it can never be a referentially transparent construction. However, in an idealized world, we can invert the Observer pattern as Functional Reactive Programming (FRP):

http://www.mail-archive.com/haskell-cafe@haskell.org/msg66898.html
http://www.haskell.org/haskellwiki/Phooey

FRP is not theoretically less efficient, because it could be optimized to only recompute the portions of the global FRP chain that have changed; thus it isn't different from the propagation of state dependencies in the Observer pattern in that respect. In both Observer and FRP, the order of propagation of state change can be coded indeterminately or deterministically. However, in some respects the Observer pattern is easier, because one can just slap one on anywhere, without too much concern for overall design (however, this sloppiness will manifest as race conditions and other out-of-order situations, etc.).
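The recompute-only-what-changed point can be sketched as a hypothetical reactive cell in TypeScript (my own illustration, not Phooey's API): a derived value is recomputed only when a cell it depends on actually changes.

```typescript
// A minimal dependency-propagating cell (hypothetical API).
class Cell<T> {
  private deps: Array<() => void> = [];
  constructor(private value: T) {}
  get(): T { return this.value; }
  set(v: T): void {
    if (v === this.value) return;           // unchanged: no propagation
    this.value = v;
    this.deps.forEach(recompute => recompute());
  }
  onChange(recompute: () => void): void { this.deps.push(recompute); }
}

// Derived cell: recomputed only when the input actually changes,
// so unchanged portions of the chain cost nothing.
function derive<A, B>(src: Cell<A>, f: (a: A) => B)
    : { cell: Cell<B>; recomputes: () => number } {
  let count = 0;                            // instrumentation for the sketch
  const out = new Cell(f(src.get()));
  src.onChange(() => { count += 1; out.set(f(src.get())); });
  return { cell: out, recomputes: () => count };
}
```

Setting a cell to its current value triggers no recomputation downstream, which is the efficiency claim above in miniature.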

So I will not argue theoretically that it is impossible to make every possible program a composition of referentially transparent functions (that is where we want to be); however, in practice the transition from where we are today in the computer world to that future will be eased if one can use a referentially opaque function sometimes. Sometimes "quick and dirty" is what gets the job done, and often what gets the job done is what is popular. So my idea with Copute was to make the transition to pure functional programming as familiar and painless as possible... and in my case for users of afaik the most popular computer language in the world, JavaScript. Also because then I will have a ready market, since afaik there is no good pure-FP compiler with a great typing system and optional integrated dynamic typing that outputs JavaScript. (I started by studying HaXe's strengths and weaknesses, then I learned Haskell, etc.)

Back to the more fundamental theory point. Although we want programs composed entirely of referentially transparent (i.e. pure) functions, we encounter gridlock where we need to refactor up a tree of pure FP code if some function on a branch wasn't granular enough (i.e. conflated some data which really should be orthogonal). So eventually our ideal world of pure FP everywhere becomes untenable-- in short, it can't scale in wide-area composition. It is somewhat analogous to the C++ "const" blunder, which had to propagate everywhere in order to be used anywhere.

Thus we are likely to use pure FP for problem spaces that are well encapsulated, but we will continue to use imperative coding for more dynamic social integration. Thus it seems critical that our typing system make these boundaries transparent (explicit, not silently inferred polymorphism).

Do you have any comments that could spur another round of exchange? Or does this just seem wrong or irrelevant from your perspective? I am eager to learn from any one who is willing to share. Thanks.

===========
ADD: Coase's theorem applies, i.e. there is no external reference point, thus all boundaries will fail (be subverted by the free market, given that the universe is trending to maximum disorder). Thus the all-pure-FP-or-nothing boundary at the function is not realistic. It is not in harmony with thermodynamics and entropy.


Copute as a startup...

Post  Shelby on Mon Jan 24, 2011 7:46 am

Yesterday was an interesting day, because I was excited to be getting some exchanges with people who do the kind of work I do, and thus can challenge, inspire, interact with me on that intellectual level.

The emotional reaction was I believe to want to as quickly as possible find a way to get some such people to work together with me, because it would be immensely fun, exciting, and productive. I think in large part, that is why these guys work in the Silicon Valley-- for the entire social aspect of the synergy.

So right there that probably kills any chance of others working with me at this juncture, given I am in the Philippines and have no desire to go work in the Silicon Valley (or any other tech center such as San Antonio, etc).

But as I got to thinking more about the economics of what I am doing, I realized that I probably shouldn't be paying anyone a dime. The reason is that what will make this fly is it being open-source, which means people contribute because they know they own it, in that they can use the sum of the work any way they wish to, now and into the future.

So the only way to get an open-source project rolling with contributions, is to first deliver an initial product which is useful enough, that people start contributing because they need some aspect of what is already there, combined with something else they need.

So it is all about need. That is key.

Also I don't think you will get anyone to contribute to open-source if they think you are going to charge for access. So I think it is very key to make it clear that the model for the language is no charge for access. It has to be stressed that I own the Copute.com domain, and may provide an optional way for developers to monetize their efforts, but that is an entirely optional marketing side-show; Copute itself is open-source, public domain, and not owned by any one. No strings attached.

So I think the correct time to bring in investors is when we go to launch the Copute.com monetization engine, which has to come after the Copute language is done and already generating significant contribution.

So this means, I am on my own for the time-being. If anyone joins to help me at this stage, it will be a gift from God, because it would take someone LIKE MYSELF who is utterly convinced of the importance of Copute and wanting to dedicate themselves to it, without any certainty of financial gain.

I don't think I am likely to find another person like myself. I got a little bit inspired to read Joseph Perla's blog and realize there is a bright young man who shares some of my philosophy (but not all). But there is still a big gap between that, and being LIKE ME, with respect to Copute.

Of course we know what happened the last time I got inspired about a young programmer, Nicolas Cannesse, because I was admiring his work on HaXe, but it turned very bitter when he shot down every idea I had about improving HaXe and banned me from his discussion group mailing list. However, Copute is in large part influenced by HaXe, so my tribute to Nicolas is implicit. So it is not bitter after all. I even wrote in private to others, that I don't have to be frustrated with Nicolas, I wish him the very best.


Realized Haskell vs. Copute are fundamentally equivalent in power, except for...

Post  Shelby on Mon Jan 24, 2011 11:50 am

Copute has one key fundamental advantage:

http://code.google.com/p/copute/issues/detail?id=39#c2

Plus, Copute has a more intuitive syntax for imperative programmers (the bulk of programmers):

http://code.google.com/p/copute/issues/detail?id=39#c3


MILESTONE: published the _complete_ verified LL(2) grammar

Post  Shelby on Mon Jan 24, 2011 9:03 pm

This is completed:

http://copute.com/dev/docs/Copute/ref/grammar.txt

Onward to writing the compiler.


Why Copute won't support catching exceptions

Post  Shelby on Mon Jan 24, 2011 10:11 pm

re: Steven Obua just described my Copute project, very substantially

http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-99

Agreed, Go is aimed towards systems programming and isn't an optimum solution for the use case I am driving towards either. Moreover, I think the future of systems programming is to have apps that are provably correct (Linus Torvalds admitted such a remote possibility for "designer languages"), so I think Go is also going to be superseded in time, but that might be a long time from now. Good to see Thompson is working on an upgrade for C.

The problem is that an exception causes order dependence, which removes the ability to parallelize the implementation.

Code:
raise First + raise Second handle First => 1 | Second => 2,

what is the value of this expression? It clearly depends on the order
of evaluation.
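That quoted expression can be rendered in TypeScript to make the order dependence concrete (a sketch; `raiseFirst`, `raiseSecond`, and `sumWithOrder` are hypothetical names, with the evaluation order turned into an explicit parameter):

```typescript
// Two "exceptions" raised as tagged errors; which one surfaces depends
// entirely on which operand of the addition is evaluated first.
const raiseFirst = (): number => { throw { tag: "First" }; };
const raiseSecond = (): number => { throw { tag: "Second" }; };

function sumWithOrder(leftFirst: boolean): number {
  try {
    // Make the evaluation order explicit instead of leaving it to the language.
    const a = leftFirst ? raiseFirst() : raiseSecond();
    const b = leftFirst ? raiseSecond() : raiseFirst();
    return a + b;
  } catch (e) {
    // The "handle First => 1 | Second => 2" clause from the quote.
    return (e as { tag: string }).tag === "First" ? 1 : 2;
  }
}

console.log(sumWithOrder(true));  // 1
console.log(sumWithOrder(false)); // 2
```

If the two operands were evaluated in parallel, the result would be nondeterministic, which is the point of the quote.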

Although afaik an exception does not literally violate referential transparency, it does remove the orthogonality of functions, which for me is one of the key outcomes of referential transparency, and orthogonality is necessary for composability. Whereas typing is the Lego pattern for composability. (Btw, the name Co-pute is driving towards cooperation and composition; I am aiming for a wide-scale web mashup language.)

If the programmer expects the possibility of an exception, then the function needs to declare that in its types. Afaics, there is no shortcut around exceptions without types that maintains composability and concurrency. The programmer can make a type NeverZero, if he prefers to attack the problem on input, sort of analogous to reversing the Observer pattern to Functional Reactive Programming as I mentioned in my 2nd post above.

Yeah it is fugly to have to propagate exception cases every where. But that is life. The shortcut has a real important cost. And I agreed with the comments at Go, that exceptions turn into a convoluted mess, especially when one starts composing functions in different permutations.

==============
ADD: Consider a function A which takes a function B as input, where A catches an expected exception, but this is not declared in the type of A or B. Unlike other Bs in the past, this B catches the very exception that A expects to catch. The programmer of B has no way of knowing that A expected the same exception, because it is not declared in the types. Orthogonality and composability are subverted. Whereas if B declared, by returning a non-exception type, that it handles the exception, the problem is resolved. Then A would be overloaded (folded) on the return type of B: one version/guard of A that handles the exception, and one that lets B handle it.

I can see how you get the elegant determinism by basically adding a "null" test on every return type of every function, instead of doing a longjmp, but afaics hiding the exception return type from the programmer causes the above problem.
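A minimal TypeScript sketch of the proposed fix, assuming a `Result` type stands in for the declared non-exception return (the names `a`, `b`, and `Result` are hypothetical illustrations, not Copute syntax):

```typescript
// Result type: failure is part of the declared return type, not a hidden longjmp.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

// B's type declares whether it surfaces its failure or absorbs it.
type B = () => Result<number>;

// A can only see failures that B's type chose to surface; its handling
// is explicit and checked by the compiler.
function a(b: B): number {
  const r = b();
  return r.ok ? r.value : -1;
}

const bHandles: B = () => ({ ok: true, value: 42 });       // B absorbed its failure
const bSurfaces: B = () => ({ ok: false, error: "oops" }); // B surfaces it to A

console.log(a(bHandles));  // 42
console.log(a(bSurfaces)); // -1
```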

I see no problem using exceptions when there is no function call inside the try block.

If I have made a wrong assumption or erroneous statement, I apologize in advance.


More on composability and exceptions

Post  Shelby on Tue Jan 25, 2011 8:35 am

Still "talking shop" with Steven Obua.

http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-103

I was making two related points; one is that concurrency is not achieved unless we use a Maybe type. Afaics, you've since clarified for me (thank you) that you are using a Maybe type, and you've hidden (made implicit) the monadic action in a try-catch abstraction.

Agreed, exceptions can be concurrent if they don't employ the longjmp paradigm, and instead (even implicitly) employ the Exception monad on the Maybe algebraic type, where the compiler is doing the monadic lifting behind the scenes.

So our discussion is the choice between doing that implicitly and doing it with static typing. The advantage of doing it with static typing is that it can be propagated automatically with a monad type (which afaics Babel-17 achieves), and we gain the ability to overload on type (as Haskell and Copute do).

The second point I was making is that static typing is critical for composability.

With dynamic typing, the only way to prove correctness is with assertions on inputs (i.e. exceptions). These assertions are just types[1], e.g. instead of throwing an exception to insure NonZero, just make the input type a NonZero type.

How do we compose functions when their invariants are not explicitly stated by type, but rather hidden in their implementation as assertions that will throw exceptions? We end up with spaghetti, because the composition of the invariants is not being checked by the compiler. Non-declared assumptions get lost. I have 25+ years coding in spaghetti. Is there another way to deal with it that I am not aware of?

For composability, afaics the exceptions must be coded on the return type (post-conditions[1]), and/or on the input types (pre-conditions[1]).

[1] http://lambda-the-ultimate.org/node/1551#comment-64186


=======================================
=======================================

http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-105

Agreed I definitely want to try to avoid subjective injection (because it won't help either of us produce a better product). So we are not going to fight that irrational war, because we will delineate what we can conclude objectively. You will correct me and point out where I am making a subjective conclusion.

Agreed that the tension in composability hinges on the granularity (more completely, the fitness) with which the invariants can be declared and enforced/checked. Agreed also that to the degree one's statically typed implementation does not fully express the invariant semantics, aliasing error will spill out chaotically (aliasing error manifests as noise).

Isn't the objective fallacy of arguing against static typing that the alternative isn't better? Afaics, testing is never exhaustive due to the Halting problem (beyond getting some common use cases covered, which is not an argument against static typing because, as you also said, it is needed in any case), because one would need to test every possible composition in all permutations of N potential mashup companion functions before we even know what they will be. Note, there were some comments at LtU yesterday about the impracticality and inadequacy of testing for proving associativity for Guy Steele's example. Documentation is not an argument against static typing either, as it is needed in any case.

Static typing is a first-level check; it enables the compiler to catch some errors. And to the degree one strives to produce types that fully express the invariant pre- and post-conditions at all semantic levels, the degree of checking is increased (but sadly aliasing error isn't a linear phenomenon, so that might not help). This is not security against all possible semantic errors, but at least the remaining errors are those that slipped through the design of the types (even though they manifest as aliasing error far from the source). Types can be reused, so we can put a lot of effort into designing them well.

There is a tradeoff. As the types become more restrictive, they become more difficult to compose. The C "const" blunder is one of the infamous painful examples (I've hopefully been careful not to repeat "const" in Copute). This is real life injecting itself into our attempts at a Holy Grail, of which there never will be one, of course. "const" is actually a futures contract, which is the antithesis of natural law: it could never be assigned to a non-const in any scenario (there was no escape route), thus it infected the entire program.

Thanks for pointing out that an exception is not a result type and thus a design error. I agree that declaring exceptions as return types is a design error (an ad-hoc hack), because an exception is not a proper post-condition, i.e. it is a non-result semantic and thus a design error to return it as a result. But I don't see objectively how an implicitly lifted exception-monad try-catch is not also a design error by the same logic? Divide-by-zero means our result is NFG (no fugling good, lol), which is not a result at all; it is a different semantic entirely, so normally we design our code so that the exception will never occur.

So I offer a NonZero argument type; the caller constructs that type and checks at run-time that it is not passing a 0 value. The dynamic checks are still there in static typing, but they are forced to be checked (note the constructor NonZero( 0 ) would throw an assumed uncaught exception, aka an assert, i.e. stack trace into the debugger, because the caller didn't even do the check). If the caller already had a NonZero type, they don't need to check it again. Afaics, that is more PROVABLY correct than returning an exception, because it describes compiler-checked invariants, rather than an ad-hoc return which is not a return semantic. So I am not arguing for an exception return type, except relative to your try-catch as an ad-hoc "solution" (perhaps that wasn't reified in my prior post), but rather for declaring the invariant arguments and avoiding exceptions entirely, where practical (i.e. correct design).
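A hedged TypeScript sketch of the NonZero idea (the class and `divide` are illustrative names; Copute's actual mechanism would differ):

```typescript
// NonZero: the precondition is a type; construction is the single runtime check.
class NonZero {
  private constructor(readonly value: number) {}
  static of(n: number): NonZero {
    if (n === 0) throw new Error("NonZero(0)"); // assert: caller failed to check
    return new NonZero(n);
  }
}

// divide can no longer be called with a zero divisor, so its own
// signature needs no exception semantics at all.
function divide(numerator: number, divisor: NonZero): number {
  return numerator / divisor.value;
}

console.log(divide(10, NonZero.of(2))); // 5
```

A caller that already holds a `NonZero` can pass it along any number of times without rechecking, which is where the proof obligation moves from every call site to one construction site.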

After all, we are not in a religious war, because Copute supports dynamic typing too. I understand that in some (maybe even most) use cases, static typing does not provide enough benefit to justify its use. It can cause tsuris for very minute gains in checking. Afaik, inferred typing goes a long way to increase the utility. And potentially map-reduce constructed data types will make typing much more useful, which pertains to the title of this blog page (the link is to my comments which were censored from LtU). I am not criticizing Babel-17 for not having static typing (I encourage you to pursue your design ideas); I only asked that we characterize the tradeoffs of potentially adding it later incrementally, never, or now (as I am trying to do in one big, difficult design+implementation step). And afaics, this discussion has helped document/reify/correct some of my own understanding. I hope we have also clarified for your readers some of your design decisions for Babel-17. What else can I say, but a big sincere thank you.

Are there any more objective observations we can make on this issue? Any corrections?


Shocking Java comparison

Post  Shelby on Fri Jan 28, 2011 3:12 pm

I was shocked how many things Java can not do (well), that Copute proposes to do:

http://copute.com/dev/docs/Copute/ref/intro.html#Java

This list became more exhaustive after researching how I might compile Copute code to Java source code-- it probably can be done, but the Java code will be ugly and bloated, and it will be almost impossible to follow the semantics.


Some HaXe issues

Post  Shelby on Thu Feb 03, 2011 9:27 pm



Eliminated exceptions from computer language!

Post  Shelby on Fri Feb 04, 2011 12:07 am

Wow! I was able to eliminate the assertion exceptions entirely:

http://code.google.com/p/copute/issues/detail?id=42

Remember the prior discussion with the creator of Babel-17? Well, afaics, his point is entirely void now.

I think this is a major breakthrough for computing. Absolutely, fundamentally important.


Attempt to shutdown the internet, failed.

Post  Shelby on Fri Feb 04, 2011 4:07 pm

As I had predicted, because it only takes a trickle of truth to route around captive markets (Coase's Theorem again, i.e. the 2nd law of thermo: the low-entropy state will always seep away into maximum disorder, i.e. possibilities):

http://edition.cnn.com/2011/TECH/web/02/03/internet.shut.down/index.html?hpt=Sbin

"If you really wanted to turn off the global internet, you'd have to seek out people on every continent and every country," said Cowie from Renesys. "The internet is so decentralized that there is no kill switch."

"No you can't do that," said Harvard's Faris. "The internet is designed to be robust. Certain links break and then other links are opened."

In Egypt, for example, people who couldn't access the broadband internet were able to place international phone calls to Europe to log on to dial-up internet service, he said, which, of course, operates on phone lines.

Google even announced a service that would let people in Egypt use landline telephones to post to Twitter using voice messages.

"Communication continues and people revert to other modes,"

Use your WiFi router as a wireless multi-hop network hub:

http://www.zerohedge.com/article/how-maintain-internet-access-even-if-your-government-turns-it

SRSrocco, is technology imploding due to "complexity" as you had predicted? No. That is because you did not understand that increasing possibilities is not complexity; it is actually less complex, because there are fewer binding futures contracts and more freedom for the market to anneal to dynamic situations.


Facebook currency

Post  Shelby on Fri Feb 04, 2011 7:48 pm

http://www.marketoracle.co.uk/Article25982.html#comment100122

Shelby wrote: The currency has to be created. If it does not represent an exchange of tangible money, then it means whoever created it is like a central bank, stealing that value and causing inflation as it spends it into the ecosystem.

So in essence what Facebook has done is admit that their business model is an economic failure:

http://www.jperla.com/blog/post/facebook-is-a-ponzi-scheme

Now they will take down all the game developers with them, because their system could not sustain economic viability with a free market of game developers charging how they wished.

One thing is that a virtual currency is a way for people in the developing world to earn income on Facebook while avoiding the issues of money transfer and transactional costs. But the problem is that if Facebook doesn't back this currency with gold or silver, then it will just be stealing value via inflation and ripping off the entire ecosystem, eventually bringing the whole thing down.

The genesis of Facebook's demise will be the decentralized social network. It is coming. The Napster or Gnutella version of Facebook is coming. I should know. Hint.


Scala 2500% growth rate of job listings at indeed.com

Post  Shelby on Sun Feb 06, 2011 5:27 pm

I can't wait until a year or 1.5 years from now, when we do the search on indeed.com for both sets of jobs advertised.

You entirely missed the point of why I posted about Scala's 2500% (and accelerating!) growth rate.

The point is that it shows that there is a potentially huge demand for a Copute-like breakthrough in language design.

One of the first things a marketer has to determine is if there is a high-growth unmet need or market niche:

http://www.scala-lang.org/node/3272
http://stackoverflow.com/questions/1104274/scala-as-the-new-java
http://stackoverflow.com/questions/1108833/should-i-study-scala

Of course, it is possible that once Scala saturates the hard-core early adopters, its growth will falter, because Scala is very hard to learn (a "kitchen-sink" of features, with unfamiliar syntax):

http://www.google.com/search?q=scala+criticism
http://creativekarma.com/ee.php/weblog/comments/my_verdict_on_the_scala_language/
http://creativekarma.com/ee.php/weblog/comments/static_typing_and_scala/
http://stackoverflow.com/questions/3112725/advantages-of-scalas-type-system/3113741#3113741
http://stackoverflow.com/questions/1025181/hidden-features-of-scala

Imo, Scala suffers from trying to be too general, allowing too many ways to do the same thing, and thus requiring the programmer to know all of it in order to read the code of others (i.e. no single-point-of-truth simplicity). I am consciously trying to limit Copute's syntax and keep it familiar (C-like), including only paradigms that provide the necessary generality and unifying orthogonal paradigms.

If you want to wrap your head around how knowledgeable I am in this field, take for example this page (and I have years and millions of lines of code of experience in assembler, C, C++, PHP, etc. too):

http://stackoverflow.com/questions/61088/hidden-features-of-javascript

I think you'd be hard pressed to expose an average C# programmer to all of it in one place if not for SO. It'd take years of playing with it to come up with the same hard won list. – Allain Lalonde Sep 14 '08 at 18:54

I've been writing JavaScript professionally for 10 years now and I learned a thing or three from this thread. Thanks, Alan! – Andrew Hedges Sep 20 '08 at 7:39

Well I knew everything on that page, and even I can correct some mistakes (I knew this years ago):

@Vincent Robert: please note that arguments.callee is being deprecated. – ken Dec 29 '10 at 21:50

Wrong, it is arguments.caller that is being deprecated.

https://developer.mozilla.org/en/JavaScript/Reference/Functions_and_function_scope/arguments/caller
https://developer.mozilla.org/en/JavaScript/Reference/Functions_and_function_scope/arguments/callee

A few comments on Scala:

http://creativekarma.com/ee.php/weblog/comments/my_verdict_on_the_scala_language/

The complexity of functional programming is perhaps a bit easier to explain. It’s certainly possible to write nearly conventional code in Scala. Here, for example, is Scala code for a conventional way to sum the elements of a list of integers:
Code:
    def sum(l: List[int]): int = {
        var result: int = 0
        for (item <- l)
            result += item
        return result
    }

But that’s not “the Scala way”. A good Scala programmer is expected to use this instead:
Code:
    def sum(l: List[int]): int = (0/:l){_+_}

Isn’t that lovely? This code (that appears to be line noise) is probably the most classic example of a catamorphism (a data transform that results in less data out than in). Basically it says, initialize the result to 0, then go from beginning to end through l, and for each item compute the new result to be result+item. Oh wait, isn’t that exactly what our earlier code did? Well, um, yes, but… this way doesn’t use any of that evil nasty mutable state. This code might be darned near illegible to normal people, but it is the pinnacle of purity and virtue in the FP world.
Oh… if you’re one of those people who worries about efficiency, you do not want to know what that tiny little bit of code expands to.

Let me say that I like immutable data and immutable data structures. I wrote an article here about immutable data in Java. One of the things that attracted me to Scala was its full support for immutable data and immutable data structures. But dagnabbit, mutable variables within a method (or function) aren’t a crime and won’t hurt anything when they’re allocated on the stack as is done by the JVM. This mutable vs. immutable fight has been going on since Turing and Church. Pretty much 100% of all actual, working, useful programs have been written with mutable state. So FP programmers, just get over your aversion to it, okay?

Although I agreed with much of what that author wrote, and I also agree with him that Scala FP syntax is cryptic and that there are too many ways to write the same thing in ever more cryptic ways (e.g. the use of _ + 1 instead of x => x + 1 for a lambda/anonymous function), I still must point out that the author is wrong about the importance of using FP within functions: although this doesn't impact the immutability of the containing function, it does impact whether the inner code is concurrency agnostic and can be parallelized on N cores.

Copute won't even allow "for" loops (while loops are available for diehards), but it will have a much more sane syntax for FP expression.
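For comparison, the same catamorphism written with TypeScript's `reduce`, which keeps the initializer and the combining function explicit without Scala's symbolic `/:` operator (a sketch, not Copute syntax):

```typescript
// sum as a left fold: initialize the accumulator to 0, then go through l
// and combine each item into the running result -- exactly what the
// imperative loop did, with no mutable variable in sight.
function sum(l: number[]): number {
  return l.reduce((result, item) => result + item, 0);
}

console.log(sum([1, 2, 3, 4])); // 10
```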

And finally I end this post with some humor from the comment section of that prior link:

Code:
(0/:l){_+_} !!!!

I’m not one to disparage a languages syntax for being too heavy on the symbols, but do you guy realize that looks like a smily version of goatse guy wearing a hat?


Last edited by Shelby on Mon Feb 07, 2011 1:46 am; edited 5 times in total


Real risk I have on Copute is technical, not marketing risk

Post  Shelby on Sun Feb 06, 2011 7:26 pm

This is why I am spending so much effort on the design stage now.

Because even if no one ever used Copute, it would be worth building just for my own use, because I plan to write another million lines of code in my lifetime, and Copute can make that an order-of-magnitude more productive. The existing languages are bad enough that it is actually worth a year of time to fix them, even if I become the only user of Copute. However, realistically it will take more than a year, especially to get a good IDE and debugger done (so this is causing me much contemplation).

And if I do write a million lines of code using Copute, that nearly insures that it will become popular, because others will start using my libraries of code and they will be using Copute when they do.

So the main challenge is whether Copute is technically correct (a true order-of-magnitude gain, or a fallacy?), and timing to get there before someone else makes Copute unnecessary.


Every 10 years we need a new programming language paradigm

Post  Shelby on Sun Feb 06, 2011 8:38 pm

http://creativekarma.com/ee.php/weblog/about/

In 1975 I started using “structured programming” techniques in assembly language, and became a true believer.

In 1983 a new era dawned for me as I started doing some C programming on Unix and MS-DOS. For the next five years, I would be programming mixed C/assembly systems running on a variety of platforms including microcoded bit-slice graphics processors, PCs, 68K systems, and mainframes. For the five years after that, I programmed almost exclusively in C on Unix, MS-DOS, and Windows.

Another new era began in 1994 when I started doing object-oriented programming in C++ on Windows. I fell in love with OO, but C++ I wasn’t so sure about. Five years later I came across the Eiffel language, and my feelings for C++ quickly spiraled toward “contempt.”

The following year, 2000, I made the switch to Java and I’ve been working in Java ever since.

About now, it is time for the one that follows the Java (virtual machine, garbage collection, no pointers, everything is an object) paradigm.


Copute influenced by

Post  Shelby on Mon Feb 07, 2011 12:41 am

The Wikipedia page for each computer language, lists the languages that it was influenced by.

Copute influenced by in chronological ascending order:

C++
PHP
JavaScript
HaXe
Haskell
Scala

Copute influenced by degree of importance/influence descending order:

HaXe
Haskell
Scala
JavaScript
PHP
C++


P.S. I misread the charts at indeed.com; they are "percentage growth", not "percentage rate of growth". Thus the 2500% growth of Scala jobs and the -50% loss of Cobol jobs are cumulative, not a variable rate of growth (i.e. distance, not the 1st derivative = velocity). It looks like Scala's 2500% growth was (afair) primarily in the past 2 years, so it is still an astronomic rate if sustained, but in any case the # of years would be significantly more than I computed previously. Also the rate of growth may be slowing already (I would need to go back and study the chart carefully).


Doug Pardee rebuttal

Post  Shelby on Mon Feb 07, 2011 2:04 am

http://creativekarma.com/ee.php/weblog/comments/death_to_the_liskov_substitutability_principle/

The problem is not to reuse monoliths but to make interfaces granular enough to reuse the components of the monoliths.

http://creativekarma.com/ee.php/weblog/comments/static_typing_and_scala/

Although I agree that to some extent Scala has some syntax and paradigm complexity overload, afaics you missed the key point that although imperative code does not impact the referential transparency (mutability) of its containing function, FP code is required to obtain parallelism:

http://goldwetrust.up-with.com/t112p90-computers#4061

I think much of Scala's perceived complexity comes from the way Scala has been explained so far, which has been very obtuse; the syntax is just different enough from the C++/Java genre to make it hard to read (it takes the mind years to become as comfortable reading a new syntax as it did learning to drive a manual transmission); and many paradigms in Scala are not unified (e.g. lazy vals and by-name parameters are really both just automatic function closures). I am working on improving these issues for Copute and adding referential transparency.

http://copute.com/dev/docs/Copute/ref/intro.html
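The claim that lazy vals and by-name parameters are both just automatic function closures can be sketched in TypeScript, where the closures must be written out by hand (`byName`, `lazyVal`, and `expensive` are hypothetical names for illustration):

```typescript
// by-name parameter: pass the computation, not its value;
// it is re-evaluated at each use site.
function byName(thunk: () => number): number {
  return thunk() + thunk();
}

// lazy val: the same zero-argument closure, but memoized on first use.
function lazyVal<T>(thunk: () => T): () => T {
  let cached: T | undefined;
  let done = false;
  return () => {
    if (!done) { cached = thunk(); done = true; }
    return cached as T;
  };
}

let evals = 0;
const expensive = () => { evals++; return 21; };

console.log(byName(expensive)); // 42, evaluates expensive twice
const lazy = lazyVal(expensive);
console.log(lazy() + lazy());   // 42, evaluates expensive once more
console.log(evals);             // 3
```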


Last edited by Shelby on Sat Aug 06, 2011 1:14 pm; edited 4 times in total


Compare explanations of mixins

Post  Shelby on Mon Feb 07, 2011 11:19 am

My explanation:

http://copute.com/dev/docs/Copute/ref/class.html#Inheritance

Creator of Scala's explanations:

http://www.scala-lang.org/node/117
pg 5, 2.2 Modular Mixin Composition section of Scalable Component Abstractions, Odersky & Zenger, Proceedings of OOPSLA 2005, San Diego, October 2005 (note Copute reverses the inheritance list order).

Which one is more complete, concise, and comprehensible? Don't answer in this thread please.

Here is a worse explanation from out on the web:

http://debasishg.blogspot.com/2006/04/scala-compose-classes-with-mixins.html
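For readers coming from the JavaScript side, here is a minimal sketch of the same mixin idea in TypeScript, using the class-expression pattern (the names `Serializable`, `Timestamped`, and `Model` are illustrative, not Scala or Copute semantics):

```typescript
// A constructor type that mixins can extend.
type Ctor<T = {}> = new (...args: any[]) => T;

// Each mixin is a class expression parameterized by its superclass,
// so the linearization order is chosen at composition time.
const Serializable = <S extends Ctor>(Base: S) =>
  class extends Base {
    serialize(): string { return JSON.stringify(this); }
  };

const Timestamped = <S extends Ctor>(Base: S) =>
  class extends Base {
    created = "2011-02-07";
  };

class Point { constructor(public x = 1, public y = 2) {} }

// Compose: Point, with Timestamped and then Serializable layered on top.
class Model extends Serializable(Timestamped(Point)) {}

console.log(new Model().serialize());
```

Reversing the composition order changes which mixin sits closer to the base class, which is the linearization question the Scala paper addresses.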

P.S. I post to my goldwetrust.up-with.com forum, so I have copies of my thoughts to refer back to, so I won't lose the information. It has nothing to do with wanting to target a programmer audience of readers. I don't want a lot of questions and attention from programmers right now, that will just slow me down with their confusion about what I am trying to design. I get plenty enough design input by reading voraciously what others have done and research papers.


thoughts about the Halting problem with respect to Copute

Post  Shelby on Tue Feb 08, 2011 10:51 pm

I think they will find flaws in my work, but hopefully only flaws that I know exist, because they are matters owned by God, e.g. the ability to predict the future. This is why the Halting Theorem says that it is "undecidable" whether a program will ever halt. It means we can't know the answer. Examples are very simple programs called cellular automata. They have very simple rules, maybe only 2 or 3, yet the only way to know what value they will create on the trillionth iteration is to run them a trillion times. We don't know if they will ever halt or repeat the same pattern. Some produce a repeated pattern for a long time, then start producing another pattern. They have patterns within patterns within patterns, that are revealed only as you run them longer.
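To make the cellular-automaton point concrete, here is a TypeScript sketch of an elementary CA (Rule 110, which has been proven Turing complete); the only way to learn the row at step n is to run all n steps:

```typescript
// One step of an elementary cellular automaton on a ring of cells,
// given its rule number (the 8-entry lookup table packed into one byte).
function step(cells: number[], rule: number): number[] {
  const n = cells.length;
  return cells.map((_, i) => {
    const left = cells[(i - 1 + n) % n];
    const right = cells[(i + 1) % n];
    const pattern = (left << 2) | (cells[i] << 1) | right; // 0..7
    return (rule >> pattern) & 1; // look up the next state in the rule byte
  });
}

// Rule 110: trivially simple local rule, unpredictable long-run behavior.
let row = [0, 0, 0, 0, 0, 0, 0, 1, 0]; // single live cell
for (let t = 0; t < 5; t++) row = step(row, 110);
console.log(row.join(""));
```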

We can prove that some programs halt, and that is because they are not Turing complete logic machines. They are limited in what they can express.

My job is now is to give the language a way to express that the futures contracts do not exist in portions of the program. Once we isolate where the futures contracts are, we can isolate those portions that can not be composed freely and will always have bugs and will be "undecidable".

P.S. Remember my work where space-time is just a perception. The mimic octopus illustrates how we can be fooled by our space-time senses.


Last edited by Shelby on Sat Feb 19, 2011 12:36 pm; edited 1 time in total


