Computers:


Constructors are considered harmful

Post  Shelby on Mon Feb 14, 2011 12:44 am

http://gbracha.blogspot.com/2007/06/constructors-considered-harmful.html

Shelby added a comment:
Gilad is making the correct point that, in best design practice, composable modules (i.e. APIs) should expose abstract interfaces, not their concrete classes. Thus 'new' becomes impossible, because there is no concrete class to instantiate. Apparently Gosling realized this.

Static factories in abstract interfaces can accomplish this with type parametrization.

Gilad Bracha wrote: The standard recommended solution is to use a static factory [...] You can’t abstract over static methods

Static methods can be abstracted (over inheritance) with type parametrization, e.g.

Code:
interface Factory<+Subclass extends Factory<Subclass>>
{
  newInstance() : Subclass
}

where the + declares that Factory can be referenced (assigned) covariantly, per the Liskov Substitution Principle. The + is unnecessary in the above example, but it exists in the syntax generally to perform checking against the LSP.

Thus any API can abstract over any publicly exposed Factory<T> by associating it privately with instances of Factory<InheritsFromT>. In other words, our interface could have been as follows, where the factory creates a new instance of itself, which can be any subtype, because inherited return types may be covariant.

Code:
interface Factory
{
  newInstance() : Factory
}
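The idea can be made concrete in a minimal compiling Scala sketch (the names Counter and CounterImpl are mine, for illustration): the API exposes only an abstract interface plus a static factory, so client code can never name a concrete class.

```scala
// The API exposes only this interface; no concrete class is visible.
trait Counter {
  def value: Int
  def incremented: Counter // returns the abstract type, never the implementation
}

// Concrete implementation, hidden from API clients.
private class CounterImpl(val value: Int) extends Counter {
  def incremented: Counter = new CounterImpl(value + 1)
}

// The only way to obtain an instance: a static factory returning the
// interface type, so callers can never write `new` against a concrete class.
object Counter {
  def newInstance(start: Int = 0): Counter = new CounterImpl(start)
}
```

Client code like `Counter.newInstance(41).incremented` compiles against `Counter` alone, so the implementation can be swapped without touching callers.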

On the topic of concrete implementation inheritance and constructors, Scala-like mixins cannot handle this, because external parameters for constructors are not allowed in a trait implementation; thus each mixin is detached from the external inheritance order. I have taken this a step further in my design for Copute, because I force the separation of purely abstract interface and purely concrete mixin implementation: every mixin has to inherit from an abstract interface and can only be referenced as a type via that interface, i.e. concrete mixins are not types in any scope other than the mixin declaration.

I have refined the solution since I wrote the above. Above I was proposing that static factories in an interface could have their return type parametrized to the implemented subtype class. That does not solve the problem. Static factories in an interface are necessary for other SPOT (single-point-of-truth) and boilerplate elimination reasons, e.g. see my implementation of 'wrap' (a/k/a 'unit') in an IMonad (IApplicative), and notice how much more elegant it is than the equivalent Scalaz. Note SPOT is also a critical requirement for maximizing modularity, i.e. ease of composition and reuse.

Rather, to accomplish abstraction of constructors, we need to nudge programmers to input factory functions, so that any code can be abstracted over another subtype of an 'interface' (i.e. instead of calling a 'class' constructor directly, input a factory function which returns the result of calling a constructor; thus the caller can change the subtype being constructed). So the important point is that we want to force programmers to create 'interface'(s) for all their 'class' methods, which is accomplished by not allowing a method implementation (i.e. 'class' or 'mixin') to be referenced anywhere except in the 'inherits' declaration and the constructor call. This means the type of an instance reference cannot contain an identifier which is a 'class' or 'mixin', thus forcing the type of every instance reference to contain only identifiers which are an 'interface'; i.e. instance references reveal the abstract interface, but do not indicate the implementation.
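A minimal Scala sketch of the factory-function idea (Logger, runJob, etc. are my illustrative names, not from the post): the code being reused never calls a constructor directly; it inputs a factory function, so the caller decides which subtype is constructed.

```scala
trait Logger { def log(msg: String): Unit }

final class ConsoleLogger extends Logger {
  def log(msg: String): Unit = println(msg)
}

final class BufferLogger extends Logger {
  val lines = scala.collection.mutable.ListBuffer.empty[String]
  def log(msg: String): Unit = lines += msg
}

// runJob never names a concrete class: it inputs a factory function,
// so the caller controls which Logger subtype gets constructed.
def runJob(makeLogger: () => Logger): Logger = {
  val logger = makeLogger()
  logger.log("job started")
  logger
}
```

Calling `runJob(() => new BufferLogger)` versus `runJob(() => new ConsoleLogger)` changes the constructed subtype without touching runJob.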

So Copute will have a crucial difference from Scala and the other contenders (e.g. Ceylon): 'interface' and 'mixin' will be separated (not conflated in a 'trait'), and only 'interface' can appear in the type of instance references. Note that in Copute (just as for a 'trait' in Scala), 'mixin' and 'interface' may not have a constructor. Scala's linearised form of multiple inheritance is retained.

Note this unique feature of Copute (along with the elimination of virtual methods, a/k/a 'override') is also necessary to enforce the Liskov Substitution Principle (which relates to the concepts of covariance and contravariance):

(note some of the following specification is out-of-date with current specification in grammar and in my various notes around)
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
http://copute.com/dev/docs/Copute/ref/class.html#Static_Duck_Typing
http://copute.com/dev/docs/Copute/ref/function.html#Overloading

Here are some more relevant thoughts from Gilad Bracha:

http://gbracha.blogspot.com/2008/02/cutting-out-static.html?showComment=1221878100000#c3542743084882598768

A singleton is simply a unique object. In most languages, you can use the static state associated with a class to ensure it only has one instance, and make singletons that way. But this only works because the class itself is a singleton, and the system takes care of that for you by having a global namespace.

In Newspeak, there is no global namespace. If you need singletons in your application, they are simply instance variables of your module. When you load your application, you hook up your modules and make a single copy of each.

If, on the other hand, you need a service that's accessible to an open-ended set of users, it has to be available at some public place - this could be a URL on the internet (the real global state) or a platform wide registry. In other words, it's part of the outside world's state.

Such world state may be injected into your application when it starts up (but only in as much as the platform trusts you to access it).

Not sure if this helps. The habit of static state is pervasive in computing and it's hard for people to get rid of it - but we will.

Note, Gilad Bracha helped write the Java specification.
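Bracha's module-instance approach can be sketched in Scala (Registry and AppModule are my illustrative names): the "singleton" is just an instance variable of the application module, created once at wiring time, not global static state.

```scala
// No global static: the "singleton" registry is ordinary instance state.
final class Registry {
  private var entries = Map.empty[String, String]
  def put(k: String, v: String): Unit = entries += (k -> v)
  def get(k: String): Option[String] = entries.get(k)
}

// The module receives its "singleton" by injection at startup.
final class AppModule(val registry: Registry) {
  def register(name: String): Unit = registry.put(name, "loaded")
}

// Wiring at startup: one copy of each module's state, no global namespace.
val app = new AppModule(new Registry)
```

Two separately wired AppModules share nothing, which is exactly the point: uniqueness is a property of the wiring, not of a class's static state.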

See also this:

http://en.wikipedia.org/wiki/Hollywood_principle


Last edited by Shelby on Mon Jun 20, 2011 9:21 pm; edited 9 times in total

Shelby
Admin

Posts: 3107
Join date: 2008-10-22

View user profile http://GoldWeTrust.com

Back to top Go down

re: The Smartphone Wars: Nokia shareholders revolt!

Post  Shelby on Tue Feb 15, 2011 2:15 am

http://esr.ibiblio.org/?p=2961&cpage=1#comment-296658

Shelby wrote (as "Imbecile" but not the same "Imbecile" who posted other comments):
Nokia must focus on its inherent strengths, which are as a refinement innovator, not a paradigm innovator a la Steve Jobs. Microsoft will slow them down, because Microsoft is strong in neither, and Windows Phone is too far behind in a race of exponential rate of innovation. Due to the exponential function, it is too late for anyone to go back and start a new smartphone OS from scratch, or even take time to complete an unfinished one, unless it will offer massive compelling advantages, which is probably unrealistic. The realistic forward innovations are precisely in Nokia's area of strength. By the end of 2011, Nokia's smartphone market share will have eroded to the teens, and Android's will be triple that.

Nokia should innovate on Android so it can ship a #1 selling smartphone in 2011, and incrementally differentiate itself from the herd. It is potentially possible to co-opt Google with strategic innovations that diverge from the herd's common base. The Android platform is inherently fractured, as this is the desirable nature of open source. The opportunity is wide open for Nokia to provide an unfractured Android platform. Popular innovations will eventually make their way back into the common base, but always on a lag -- look to Apple as a model of profitability as first-innovator.

There is no credible AppStore or iTunes on Android on the horizon. The opportunities to take the best of Android, win the race to market, and innovate are wide open. Do not fight against the exponential function. Embrace the strengths of open source, and your own strengths with respect to it -- this advice applies to everyone. Dinosaurs stand in the way of open source. Re-inventing Android as MeeGo at this stage is an enormous waste of capital, and the free market does not reward those who do not focus capital on their relative strengths. MeeGo is yet another coffin in the European culture cemetery of "politics 90% of the time, to get 10% production". The institutional investors are correct that the "American" (libertarian) culture of "Just Do It" wins, but the Elop and Microsoft selections are not even shadows of that.


Last edited by Shelby on Sat Mar 05, 2011 7:05 am; edited 1 time in total


Why I should work on Copute, even if I never earn a penny

Post  Shelby on Thu Feb 17, 2011 7:04 pm

Net worth is overrated.

Accepting what is.

Note this was recorded without any forethought, just stream of thought while I was deep in programming a few moments ago...

http://coolpage.com/accepting.mp3

The recording is my biblical insight into contentment.

I knew I was destined to be poor, ever since I learned to love beans & rice. Wealth is not an indicator of success. It is better to have tried and failed, than to have wasted a life on "the highest ROI" as measured in gold & silver or any other metric of money.


Tension between CR and CTR for advertising!

Post  Shelby on Fri Feb 18, 2011 2:45 am

Aha!

I remember from my high ($5000+ per month) PPC advertising spending back in 2000-2005:

http://www.ppcsummit.com/newsletter/pay-per-click/ad-copy-isnt-just-text/

While it is never a good idea to optimize ad text exclusively to CTR, if you can maintain or improve your conversion rate (CR) while also increasing CTR, you need to do so.

The problem is you can't always get them both to optimize together.

This is why paid advertising is not always the optimal model for maximizing knowledge and prosperity. The CR (conversion ratio of visitors to sales) is what matters most for maximizing knowledge and prosperity. The CTR (the click-through ratio of clicks to views of ads) is what Google needs to maximize its revenue.
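The tension is easy to see with hypothetical numbers (all figures below are invented for illustration): ad copy that doubles CTR can still lose sales if it attracts lower-intent visitors, yet it is the variant a per-click business model rewards.

```scala
// Hypothetical numbers, only to make the CR/CTR tension concrete.
val views = 10000.0

def clicks(ctr: Double): Double = views * ctr
def sales(ctr: Double, cr: Double): Double = clicks(ctr) * cr

// Punchier ad copy doubles CTR but halves-plus the conversion rate:
val baselineSales = sales(0.02, 0.05) // 200 clicks -> 10 sales
val punchierSales = sales(0.04, 0.02) // 400 clicks -> 8 sales
// The per-click seller prefers the second variant; the advertiser the first.
```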

I have realized that the way to maximize CR is to let users compete to suggest sites they like in the context of other sites, with small writeups. Sort of like blogging on another webpage, e.g. I could blog on Hommel's site or cnn.com, etc. The visitor would decide whether they want to view these suggestions. They would be ranked by CTR, but realize that then the CR would always be maximized, because visitors wouldn't cost anything (no Google and source-site charges) and would be optimized according to where the CTR is most effective, potentially with different ad copy for maximizing CTR at each possible site where the "ad" could be viewed, custom made by the users.

This would totally change the web, because ad sponsored sites would wither away. Knowledge sites and needed products would prosper more, and more efficiently with less waste and middle men.

Yeah, I think this would be great. It would drastically shrink Google's future.

I am coming after those sites which are wasting their asset, with technology paradigm shifts that put the power in the hands of the readers to vote on what is most relevant.


Category Theory is critical to understanding functional programming deeply

Post  Shelby on Sat Feb 19, 2011 8:51 pm

The best explanation I have found, which is comprehensible to someone (like me) without a master's degree in category theory, is "Comprehending Monads" by Wadler.

You can Google for it, there is a PDF online.

I am on page 8; the first 7 pages were very well written, I was able to digest them in about 1 hour, and I can say that so far I understand them very well and deeply/thoroughly (I think).

If you want to compare with a more abstract mathematical tutorial, here is a concise one:

http://www.patryshev.com/monad/m-c.html

Or overview:

http://homepages.inf.ed.ac.uk/jcheney/presentations/ct4d1.pdf
http://www.algorithm.com.au/downloads/talks/monads-are-not-scary/monads-are-not-scary-chak.pdf

Btw, Philip Wadler has been, for the past 2-3 decades, one of the most important researchers in the field of computer science:

http://homepages.inf.ed.ac.uk/wadler/vita.pdf
http://en.wikipedia.org/wiki/Philip_Wadler


Last edited by Shelby on Mon Feb 21, 2011 6:07 am; edited 3 times in total


Being Popular (computer language)

Post  Shelby on Sat Feb 19, 2011 10:18 pm

http://www.paulgraham.com/popular.html

Of course, hackers have to know about a language before they can use it. How are they to hear? From other hackers. But there has to be some initial group of hackers using the language for others even to hear about it. I wonder how large this group has to be; how many users make a critical mass? Off the top of my head, I'd say twenty. If a language had twenty separate users, meaning twenty users who decided on their own to use it, I'd consider it to be real.

Getting there can't be easy. I would not be surprised if it is harder to get from zero to twenty than from twenty to a thousand. The best way to get those initial twenty users is probably to use a trojan horse: to give people an application they want, which happens to be written in the new language.


Scala has critical defects; Copute will output to Scala w/o those defects

Post  Shelby on Sun Feb 20, 2011 5:39 pm

Copute will initially output to Scala; this is the fastest way to get a debugger/IDE for free, and the mapping from Copute to Scala is very straightforward, so time-to-market should be on the order of 3 months. HaXe has faded away as a potential target, both for lack of an IDE and for missing critical features such as type-parameter co-/contra-variance.

Scala (or maybe C#/.Net, except Microsoft is dying) is currently the best hope for the next mainstream OO+FP language.

Well, I have finally gotten to the point where I think I can enumerate the critical things Copute will be able to do that Scala apparently cannot.

And apparently these affect the very ability to be abstract (i.e. reusable and composable), which is Scala's main and mnemonic claim of superiority ("Scala is scalable").

http://copute.com/dev/docs/Copute/ref/intro.html#Scala


P.S. If you had read the Copute docs previously, note that numerous egregious errors have since been corrected. Also the quality of the docs has been significantly improved (although they still need more improvement).


Android is the killer app?

Post  Shelby on Sun Feb 20, 2011 9:48 pm

http://esr.ibiblio.org/?p=2975&cpage=3#comment-297417

Shelby (as Imbecile but not the same "Imbecile" who posted other comments) wrote:
Excuse if I'm not omniscient about such matters, so is the case that *nix conquered the server in 2008 on deadline, and Android is the killer client app? The OS became an app in a new abstraction named "cloud computing" and the network became the OS?

http://esr.ibiblio.org/?p=2975&cpage=3#comment-297465

Shelby (as Imbecile but not the same "Imbecile" who posted other comments) wrote:
Excuse if I’m not omniscient about such matters, so is the case that *nix conquered the server in 2008 on deadline, and Android is the killer client app? The OS became an app in a new abstraction named “cloud computing” and the network became the OS?

No. *nix conquered the server long, long ago.

Granted *nix server had majority market share long ago.

ESR cited that 83/23 (78/22%) for new workloads occurred circa 2007, so perhaps the conquering was, per the 90/10% rule (roughly Pareto squared), complete by 2008, thus meeting ESR's deadline.

Is Android the killer app because it paradigm-shifted open source hackers to optimize for hardware without a keyboard -- flatmapping the world closer to "the programmers are the users" and vice versa? Open source for the masses. On deadline for the roughly 10-year cycle of the imminent arrival of a new programming language for these masses:

1975 he started using “structured programming” techniques in assembly language
1983 a new era dawned for him as he started doing some C programming
1994 when he started doing object-oriented programming in C++
2000, he made the switch to Java

Can Java be the language on Android, invalidating the 10 year cycle deadline? Will the next language be the virtual machine with a smorgasbord of grammars?

Tying this to the OODA and martial arts discussion, note that solving a problem by "mapping to a new context" or "passing through an envelope" abstraction is a monad model, hence the mention of flatmap. Could the next programming language achieve monads-for-dummies?


Last edited by Shelby on Sat Mar 05, 2011 7:06 am; edited 1 time in total


Scala's standard library may have fundamental semantic errors?

Post  Shelby on Mon Feb 21, 2011 6:31 pm

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5289

Shelby wrote:
Perhaps I am missing something, but off the top of my head, I am thinking the following is semantically incorrect, because it makes all None equivalent. (Also, doesn’t this rely on Nothing being a subtype of every possible A, so that bind can always be called with the same function whether it is Some or None?)

Code:
case object None extends Option[Nothing] {
  def bind[B](f: Nothing => Option[B]) = None
}

Is that the way it is implemented in the Scala standard library? It seems to me that None should be parametrized too, so that a None for one type (e.g. String) isn’t equal to a None for another which is not covariant (e.g. Int).

Code:
case class None[+A] extends Option[A] {
  def bind[B](f: A => Option[B]) = None[B]
}


Static methods in interface; doing monads correctly for OOP

Post  Shelby on Tue Feb 22, 2011 4:37 am

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5291

Shelby wrote:
Caveat: none of the following code is tested and I am new to Scala and have never installed the Scala (nor the Java) compiler.

Daniel's "typeclass" is a fully generalized convention for declaring static methods of an interface. Imagine you could declare static methods in a trait with this pseudo-code.

Code:
trait Monad[+X] {
  static def unit[Y] : Y => Monad[Y]
  def bind[M <: Monad[_]] : (X => M) => M
}

sealed trait Option[+X] extends Monad[X] {
  static def unit[Y]( y : Y ) : Monad[Y] = Some( y )
}

To get legal Scala, this is translated as follows, noting that the +, -, or no variance annotation on M depends on where Monad appears in the static methods of Monad.

Code:
trait Monad[+X] {
  def bind[M <: Monad[_]] : (X => M) => M
}

trait StaticMonad[+M[_]] {
  def unit[Y] : Y => Monad[Y]
}

sealed trait Option[+X] extends Monad[X] {}

implicit object OptionStaticMonad extends StaticMonad[Option] {
  def unit[Y]( y : Y ) : Monad[Y] = Some( y )
}

Before we can add the cases for Option, note that Monad requires "unit" to be invertible, i.e. bijective, but None has no inverse, so we need an injective monad.

Code:
trait InjMonad[Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
}

sealed trait Option[+X] extends InjMonad[Option,X] {}

case class Some[+X](value: X) extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = f( value )
}

case class None[+X] extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = None[Y]
}
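The injective-monad sketch above can be written as compiling Scala with only small adjustments (Opt, Present, and Absent are my stand-ins for Option, Some, and None, to avoid clashing with the standard library; a parameterless case class needs an empty parameter list in current Scala):

```scala
// F-bounded "injective monad": each subtype knows its own container type Sub.
trait InjMonad[Sub[A] <: InjMonad[Sub, A], +X] {
  def bind[Y](f: X => Sub[Y]): Sub[Y]
}

sealed trait Opt[+X] extends InjMonad[Opt, X]

final case class Present[+X](value: X) extends Opt[X] {
  def bind[Y](f: X => Opt[Y]): Opt[Y] = f(value)
}

// Parametrized "None", as proposed above, rather than a single Opt[Nothing].
final case class Absent[+X]() extends Opt[X] {
  def bind[Y](f: X => Opt[Y]): Opt[Y] = Absent[Y]()
}
```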

Thus Daniel's sequence.

Code:
def sequence[M[_], X]( ms : List[M[X]], implicit tc : StaticMonad[M] ) = {
  ms.foldRight( tc.unit( List[X] ) ) { (m, acc) =>
      m.bind(_) { x =>
        acc.bind(_) { tail => tc.unit( x :: tail ) }
      }
  }
}

Note that syntax is peculiar to Scala, here is a more widely readable version:

Code:
def sequence[M[_], X]( ms : List[M[X]], implicit tc : StaticMonad[M] ) = {
  ms.foldRight( tc.unit( List[X] ), (m, acc) =>
      m.bind(  x =>
        acc.bind( tail => tc.unit( x :: tail ) )
      )
  )
}

Note my version of Daniel's sequence will work with both the bijective Monad and the injective InjMonad, because the call to bind is a method of the instance; whereas Daniel's version assumed the injective monad, and I see no possible way to fix it using his convention of implicit duck typing of non-static methods. His is an example of how duck typing breaks composability.
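For reference, here is one compiling variant of the sequence idea, with the implicit parameter written in a legal Scala parameter list. For brevity, bind lives on the typeclass here alongside unit, which differs from the instance-method convention argued for above; the names StaticMonad and sequence follow the pseudo-code.

```scala
// "Static interface" (typeclass) carrying the statics for a container M.
trait StaticMonad[M[_]] {
  def unit[A](a: A): M[A]
  def bind[A, B](m: M[A])(f: A => M[B]): M[B]
}

implicit val optionStaticMonad: StaticMonad[Option] = new StaticMonad[Option] {
  def unit[A](a: A): Option[A] = Some(a)
  def bind[A, B](m: Option[A])(f: A => Option[B]): Option[B] = m.flatMap(f)
}

// Daniel's sequence, with a legal implicit parameter list.
def sequence[M[_], A](ms: List[M[A]])(implicit tc: StaticMonad[M]): M[List[A]] =
  ms.foldRight(tc.unit(List.empty[A])) { (m, acc) =>
    tc.bind(m) { x => tc.bind(acc) { tail => tc.unit(x :: tail) } }
  }
```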

==================
**** Monad Theory ****
==================

The best layman's explanation I have found so far is "Comprehending Monads" by Philip Wadler, 1992. Google for the PDF.

Conceptually a monad has three functions:

Code:
unit : X -> M[X]
map : (X -> Y) -> M[X] -> M[Y]
join: M[M[X]] -> M[X]

The map function might be curried two ways:

Code:
map : (X -> Y) -> (M[X] -> M[Y])
map : M[X] -> ((X -> Y) -> M[Y]) // Will use this for trait below

We must overload the map function if M is not the same type as N, because otherwise map will not know which "unit" to call (in order to lift Y => M[Y]), because overloading on return type is ambiguous due to covariance:

Code:
map : (Y -> M[Y]) -> (X -> Y) -> N[X] -> M[Y]
bind : (X -> M[Y]) -> N[X] -> M[Y]
map a b = bind x -> a b x
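These equations can be checked concretely, specialized to Option so that "unit" is unambiguous; this is a sketch of the derivations, not library code:

```scala
// unit and bind for Option, the two primitives.
def unit[A](a: A): Option[A] = Some(a)
def bind[A, B](m: Option[A])(f: A => Option[B]): Option[B] = m.flatMap(f)

// map a b = bind (x -> a (b x)), with a = unit:
def map[A, B](m: Option[A])(b: A => B): Option[B] = bind(m)(x => unit(b(x)))

// join collapses one level of nesting: join = bind identity.
def join[A](mm: Option[Option[A]]): Option[A] = bind(mm)(identity)
```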

The reason I rephrased the abstracted monad as an inherited trait with static methods is that, so far in my research, I don't agree with a general "implicit" keyword in a language design. The general use of duck typing can violate the localized single-point-of-truth (SPOT) and can make semantic assumptions that were not intended, because duck typing forces all traits and classes to share the same member namespace, and thus essentially bypasses the behavioral conditions of the Liskov Substitution Principle contract of OOP. Also, since duck typing does not explicitly state which interfaces are required at the SPOT of the trait or class declaration, there is no way to know which interfaces are available by looking in one place. Localization (separation) of concerns is a critical attribute of reusable/scalable software design. Again, the following is pseudo-code for the translation of static methods to implicit, but now fully generalized to monad theory.

Code:
trait Monad[+X] {
  static def unit[Y] : Y => Monad[Y]
  def bind[M <: Monad[_]] : (X => M) => M
  def map[M <: Monad[Y], Y]( a : Y => M, b :  X => Y ) : M = bind x => a b x // bind( x => a( b( x ) ) )
  static def join[M <: Monad[_], Y] : M[M[Y]] => M[Y]
}

But the above trait won't work for monads whose "unit" is not bijective, i.e. where the inverse of "unit" is lossy, e.g. the None option has no inverse. The injective monads thus know which "unit" to call, so we could add a map to our prior injective monad which does not input a "unit".

Code:
trait InjMonad[Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
  def map[Y] : (X => Y) => Sub[Y]
}

sealed trait Option[+X] extends InjMonad[Option,X] {}

case class Some[+X](value: X) extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = f( value )
  def map[Y]( f : X => Y ) : Option[Y] = OptionStaticMonad.unit( f( value ) ) // Some( f( value ) )
}

case class None[+X] extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = None[Y]
  def map[Y]( f : X => Y ) : Option[Y] = None[Y]
}

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5333

Shelby wrote:
I will offer two improvements to my prior comment-- the prior comment wherein I had proposed a conceptual mapping of pseudo-code "static" interface members to legal Scala syntax.

Note that the StaticMonad trait (in my prior comment) is necessary to enable accessing "statics" on types (e.g. M[_]) that are otherwise unknown due to type erasure (e.g. Daniel's sequence function example), but StaticMonad is not used for direct invocation of statics, e.g. Option.unit( value ). Thus a necessary improvement is to rename object OptionStaticMonad to object Option, which makes it the companion of trait Option (or does Scala only allow this if Option is a class?):

Code:
implicit object Option extends StaticMonad[Option] {
  def unit[Y]( y ) = Some( y )
}

Also, to give functionality similar to what we expect for "static" in Java, some macro (or other language-to-Scala compiler) could automatically generate the statics for each derived class that did not override them, e.g. as follows. Although this example seems superfluous, it is not harmful, and the generality is needed in other examples.

Code:
implicit object Some extends StaticMonad[Some] {
  def unit[Y]( y ) = Option.unit( y )
}

implicit object None extends StaticMonad[None] {
  def unit[Y]( y ) = Option.unit( y )
}

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5334

Expounding on my prior comment: in the situations where StaticMonad[M] is employed (versus directly accessing the singleton), type M is unknown, and thus it is a more composable abstraction which inverts the control of access to the singleton statics, and gives that control to the caller:

http://en.wikipedia.org/wiki/Hollywood_Principle
http://lists.motion-twin.com/pipermail/haxe/2011-February/041527.html

Type erasure is an orthogonal issue that forces the use of an implicit as a function parameter, versus the compilation of a separate function body for each possible M in reified languages. Even if Scala were reified, trait StaticMonad would still be necessary to abstract the inversion-of-control on singletons. Thus the declaration of implicit instances and parameters is justified by type erasure, but they (along with StaticMonad) could just as well be hidden, to make a non-reified language appear to be reified. Which is what I was illustrating with the pseudo-code examples.

Note in Copute, Daniel's sequence would be coded:

Code:
pure sequence<M<X> : Monad<X>, X>( ms : List<M<X>> ) = {
  ms.foldRight( M.unit( List<X> ), \m, acc ->
      m.bind(  \x ->
        acc.bind( \tail -> M.unit( tail.append(x) ); );
      );
  )
}

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5347

Shelby wrote:
A variance annotation on Sub was missing in my prior comment; it should be as follows:

Code:
trait InjMonad[+Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y, S[Y] >: Sub[Y]] : (X => S[Y]) => S[Y]
}

Without that change, Some and None would not be subtypes of Option, because Sub was invariant.

Also I am thinking the following is more correct, but I haven't compiled any of my code on this page:

Code:
trait InjMonad[+Sub[X] <: InjMonad[Sub,X], +X] {
  def bind[Y, S[Y] >: Sub[Y]] : (X => S[Y]) => S[Y]
}

I am not sure if that is legal in Scala, but it seems to me that is the only way to express that Sub's type parameter is X.

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5349

Shelby wrote:
My prior idea for expressing X to be the Sub's type parameter is not retracted.

However, my suggestion of a covariant annotation on Sub is erroneous, for the reason I had stated in prior comments -- Sub's lifted state may not be invertible (e.g. None has no value of type X), and thus there may be no mapping from a Sub to its supertype. Thus the correction, restoring an invariant Sub while keeping the other idea, is:

Code:
trait InjMonad[Sub[X] <: InjMonad[Sub,X], +X]{
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
}

It was incorrect when I wrote that this invariant Sub prevents Some and None from being subtypes of Option. The Some#bind and None#bind type signatures contain Option, not Some and None respectively.

I cannot think of a useful subtype that would overload bind, but in that case, in my paradigm the subtype could multiply inherit from Monad with a distinct Sub. In Daniel's typeclass paradigm, this would entail adding another singleton implicit object that inherits from Monad.


Last edited by Shelby on Wed Mar 09, 2011 3:59 pm; edited 15 times in total


Americans are innovators/individualists; Europeans are followers/statists?

Post  Shelby on Tue Feb 22, 2011 4:45 pm

http://esr.ibiblio.org/?p=2975#comment-297554

All of our US customers purchased the product because they wanted to do something different with it. Like connecting a tektronix vector graphics terminal, or a numerically controlled underwater welding machine.

All of our European customers purchased the product because they wanted something that worked just like the IBM products but cheaper.

http://esr.ibiblio.org/?p=2987#comment-297669

Overall this Europe = socialism meme is overdone by both sides of the political spectrum in America. It’s more like America is 45% socialist, while European countries vary from say 45-65% (yes, I’d put Switzerland on par with America). And that doesn’t always come in the same places, either. While Sweden is seen by many as the archetypal Euro-socialist state, and it certainly has much higher taxes than USA, it doesn’t, for instance, have a minimum wage, and Britain with its NHS has far less union militancy than the USA seems to.


Last edited by Shelby on Mon Feb 28, 2011 5:57 am; edited 1 time in total


Pre/postconditions can be converted from exceptions to types

Post  Shelby on Tue Feb 22, 2011 6:14 pm

Major breakthrough! I figured out how to convert pre-condition rules to post-conditions on unboxing types! Wow!

Shelby wrote in email:
Hi Barbara Liskov, PhD,

Has anyone else done research showing that all pre- and post-conditions can be converted from exceptions to types?

Here is my one paragraph exposition:

http://copute.com/dev/docs/Copute/ref/intro.html#Convert_Exceptions_to_Types

On a related topic on how I applied your principle to: interface vs. mixin, I hope I haven't misapplied your famous Liskov Substitution Principle:

http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method

My abstract conclusion to incite your interest, "Thus Liskov Substitution Principle effectively states that whether subsets inherit is an undecidable problem.".

Here is a bit more:

"In order to strengthen the semantic design contract, it has been proposed to apply preconditions and postconditions on the variants of the interface. But conceptually such conditions are really just types, and can be so in practice. Thus, granularity of typing is what determines the boundary of semantic undecidability and thus given referential transparency then also the boundary of tension for reusability/composablity. Without referential transparency, granularity increases the complexity of the state machine and this causes the semantic undecidability to leak out (alias) into the reuse (sampling of inheritance) state machine (analogous to STM, thread synchronization, or other referentially opaque paradigms leaking incoherence in concurrency)."

Other places where your LSP is discussed:

http://copute.com/dev/docs/Copute/ref/function.html#Parametrized_Types
http://copute.com/dev/docs/Copute/ref/class.html#Parametrized_Type
http://copute.com/dev/docs/Copute/ref/class.html#Parametrized_Inheritance
http://copute.com/dev/docs/Copute/ref/intro.html#Scala


Note all of this is a work in progress, so there may be numerous mistakes.

Apologies if I have abused your time.

Best Regards,
Shelby Moore III


Void or Nothing never create referencable instances

Post  Shelby on Thu Feb 24, 2011 12:06 am

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5308

Shelby wrote as "III erooM yblehS":
For some reason the blog is not accepting my post under my former name. This is very important.

Any and Nothing

Iry's article predicates that List[Nothing], a/k/a Nil, is necessary because List#head would otherwise throw an exception for an empty list. But there is another way: List[T]#head could return an Option[T]. In the way Option is currently structured, that would be an Option[Nothing], but I proposed instead it could be a None[T]. So it seems Nothing is not required in this use case? Is there any other compelling reason for Nil? My K.I.S.S. design instincts tell me to avoid unnecessary special-case idioms, so I would toss Nil if it does not have any other compelling reason to exist. Note the "stupid" in KISS is not the designer; it means do not reduce abstraction with more refined knowledge (more narrow subtypes) than necessary.
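The alternative described above can be sketched in a few lines of Scala. (Standard Scala in fact ships such an accessor as List#headOption; safeHead here is a hypothetical name for illustration.)

```scala
// A head accessor that returns Option[T] instead of throwing on empty lists.
// (Standard Scala provides this as List#headOption; safeHead is illustrative.)
def safeHead[T](xs: List[T]): Option[T] = xs match {
  case h :: _ => Some(h) // non-empty list: wrap the head value
  case _      => None    // empty list: no value, and no exception thrown
}

// The caller is forced to handle the empty case explicitly via match-case:
val first: Int = safeHead(List(1, 2, 3)) match {
  case Some(x) => x
  case None    => 0
}
```

Either way the empty case is represented, the call site must pattern-match on it, which is the trade examined in the comments that follow.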

Maybe I am mistaken, but in the referentially transparent calculus, a function can never return Nothing (a/k/a Void), because a function must not have side-effects, thus it has no reason to exist (or be called) if it does not return a value. Thus an instance of Nothing can never be created in such a calculus, and can only be referenced by cast from a contravariant type parameter. In short, Nothing can never be substituted for another type, except as a contravariant type parameter.

If we reason about a type as analogous to an overloaded function where covariance is defined by overloads that have contravariant parameter, and covariant return, types (where all types are just functions), then the Nothing type is the infinite set of functions that take parameters of type Any and return Nothing, and the Any type is the infinite set of functions that take parameters of type Nothing and return Any.

Thus Any and Nothing are only ever strictly necessary as references to a concrete covariant or contravariant instance respectively, because any program can be restructured into a referentially transparent one (e.g. by using a State monad).

The key insight is that a type system is not substitutable in the contravariant direction.

Thus Option[Nothing] or List[Nothing], when used as the return type where there is a covariant type parameter, expresses that the type parameter can be anything. But this violates the contract that the Liskov Substitution Principle depends on, that a supertype has a greater set of possible subtypes than any of its potential subtypes. A covariant type system is (injective) one-to-many in the covariant direction, but this is not invertible (bijective) in the contravariant direction. Generality does not increase in the contravariant direction, but generality does decrease in the covariant direction. In short, Nothing must never be on the right-hand side (rhs) of a substitution in a covariant type system (except as a contravariant type parameter). Violation of this rule creates aliasing error, as I explained in my prior comments-- None (a/k/a Option[Nothing]) erases the concrete type parameter of an instance and means literally "I forgot and I am in an unknown random state" (no many-to-one mapping allowed in the type system). The Scala compiler should be giving an error when Nothing is substituted for a covariant type parameter in Option[Nothing].

Unlike Any, Nothing should be only a reference type, meaning you can substitute any contravariant subtype to it, and cast back to any contravariant subtype from Nothing. You must never substitute Nothing for a covariant type. Maybe Scala gets away with this for now due to type erasure (or maybe we can find an edge case where it fails), but it is on shaky ground.

Unless someone can find a hole in my analysis, I am thinking to email this to Odersky or file a bug report for Scala on this. But certainly I must be wrong? This is already well entrenched in Scala. I hope I am wrong.

@anonymous
Hopefully I have explained why the cost of a single None for all T, is semantically erroneous and will break. If I am mistaken, I appreciate anyone that can elucidate.

====================
Okay, after further thought, although (so far) I maintain my stance against Nil and Option[Nothing], because Option[T] could suffice for an empty container...

I realize that None extends Option[Nothing] is not a problem, because it can not cause a reference to an instance of Nothing. Even if there are references to Option[T] for different T, that point to instances of Option[Nothing], there can never be different references pointing to Nothing. Even if we cast the Option[T] references to Option[Nothing] references, then we've lost the ability to cast back to Option[T], but this is not a problem per se-- it is the None designer's choice. It is an oddity that is not available in the contravariant direction (i.e. with Any), because contravariance only occurs on type parameters. The designer is implying that None are either always equal or never equal, regardless of supertype. Thus, the only potential problem with None is if #equals does not always return false when either parameter is None (a/k/a Option[Nothing]). Also the orthogonal problem that #get throws an exception.

Click here for a summary that provides context. I excerpt the key portions.

Any (covariant) type, except Nothing, may be substituted for (i.e. assigned to) the abstract type Any (the most general type), i.e. all (covariant) types implicitly inherit from Any. Whereas, an Any must not be substituted for (i.e. assigned to) another (covariant) type, without a runtime cast that has been suitably conditioned to avoid throwing an exception. A Nothing will never be substituted for (i.e. assigned to) another type, because a reference to a Nothing can never be created.

Any contravariant type parameter (only type parameters have the possibility to be contravariant), including Any, may be substituted for (i.e. assigned to) the abstract type Nothing (the least general type, which means "nothing"), i.e. all contravariant types implicitly inherit from Nothing. When a Nothing occurs in the context of inheritance from a covariant, or substitution by a contravariant, type parameter, it will never create a reference to a Nothing because due to substitutability rules a contravariant type parameter can never be returned nor read, and in the covariant inheritance case, an instance of Nothing is never allowed to be constructed, because by definition Nothing does not know which covariance tree branch its instance may be on.
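The substitution rules summarized above can be checked against the Scala compiler with a minimal sketch (Sink is a hypothetical contravariant interface introduced only for illustration):

```scala
// Every type conforms to Any (the top), and Nothing (the bottom) conforms
// to every type -- both relationships witnessed at compile time:
val evTop: String <:< Any     = implicitly[String <:< Any]
val evBot: Nothing <:< String = implicitly[Nothing <:< String]

// A contravariant type parameter: a Sink[Any] substitutes for a Sink[String],
// yet no instance of Nothing is ever constructed or referenced.
trait Sink[-T] { def put(x: T): Unit }
val anySink: Sink[Any]    = new Sink[Any] { def put(x: Any): Unit = () }
val strSink: Sink[String] = anySink // allowed: Sink[Any] <: Sink[String]
```

The sketch compiles precisely because `put` only consumes T; a method returning T would make the contravariance annotation illegal, which matches the claim above that a contravariant type parameter can never be returned nor read.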

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5313

@Horst
Hello,
I later discovered via trial&error that my posts were not being accepted because I was posting from an IP address that is apparently spam-blocked by the blog service this site uses (WordPress?). So I changed IP addresses (via a VPN I have in UK or USA, whereas I am in Asia), then I posted under the mirrored name, and the new name caused my posts to go into moderator queue (hence they've been released from queue to the site, I assume by Daniel).

Once I realized that I had some lapses in logic in my posts under the mirrored name, I posted the corrected logic under my unmirrored name, on the unblocked IP address, thus my last post appeared on the site immediately (before the ones in queue).

Some of your reply apparently does not reflect the corrections in logic I made in my last post, i.e. appears you may be replying to the post I made under the mirrored name without acknowledging that I already corrected the logical error you are pointing out.

For example, I think maybe (not sure) you missed the point I was making about Liskov Substitution Principle. First of all, in addition to method parameter and return types, there are several more requirements (click here). Thus, LSP requires that every subtype has fewer possible states than its supertype, and this is exactly what gives rise to the requirements on methods. This will become more clear if you study this link (click here), which illustrates that LSP really applies to the set of possible states.

Thus I was pointing out that Nothing is a subtype of every possible class, thus it has more possibilities than any supertype, except Any. Thus my first post asserted that it violated LSP if assigned in a covariant position (and that is still true if it is assigned not as a type parameter but as an instance, though this can never occur). But then in my last post, I made the point that if Nothing can never be created as an instance (as you say, a "blank"), then LSP does not apply in that respect, because Nothing is never the type of a method parameter, nor an assignable return value. We thus see Nothing only ever applies for type parameters, or a return type that is never assigned. Thus the problem of Nothing is avoided. Agreed, thus there is no aliasing error, because the number of instances of Nothing created is exactly 0, which is what I explained in my last post.

Disagree, None should not equal None, because if I have two Option[T] references for different T, both of which point at a None, then they should not report they are equal.

My point about Iry's article was not that Nothing is unnecessary in every use case, only to state that if List#head returns Option[T] then we have to unbox it with "match-case" to get at the T value. Thus whether None is an Option[Nothing] or a None[T] is an arbitrary design choice. Either way, we have to do a match-case and handle the possibility of an empty List.

List may or may not return an Option[T] (I haven't checked), yet if not, I can code a Scala library where List#head returns an Option[T]. We are not forced to use the standard library. We are free to explore all possibilities and choose. It helps to make the standard library better, if we question everything, and accept only what passes our best analysis. I think Linus Torvalds stated, as paraphrased by Eric Raymond, "Given enough eyeballs, all suboptimal things ('bugs') are shallow". I have not yet formed a final opinion, discussion is part of discovery and analysis.

Nothing (a/k/a void) is only needed for return types that never return (which means they don't apply in a strictly referentially transparent system like Haskell if an error monad is employed, and they never create instances in a system with side effects like Scala, Java, etc), or for type parameters, which also means an instance is never created. So reasoning about bottom types is not really needed, except as a supertype for contravariant type parameters. All other cases were handled without thinking of, or teaching, void as a "bottom type" (although that is what it was). This is important for me, because I am putting much thought into how to simplify Scala and make it more palatable for the masses. That is why you are seeing the genre of analysis from me that you are-- I am questioning everything. It is my job to do so.

Friendly reminder, your use of argument and parameter is reversed from their definitions. The parameter is in the declaration of the function, the argument is in the function call (apply).

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5316

@Horst
Hi again,
Adding members to a subtype does not increase the # of potential LSP states, as the states that LSP is referring to are types in the inheritance hierarchy, not data in the types.

Nothing has infinite possible LSP states, just as Any does. The only reason we can use Nothing without breaking LSP is because there can never exist a reference to Nothing. Nothing is not a referencable type. Try creating a reference with the type of Nothing in Scala; I expect it won't compile:

val any : Nothing = any instance you like

And that is why I find it misleading to use Nothing in None, because it causes confusion for anyone who thinks of LSP fundamentally (not that many people do, but the concept of a bottom type is foreign to most OOP programmers). But on further thought, one eventually realizes that Nothing can be used as a type parameter without ever creating a reference to Nothing. Imagine trying to teach this to Java programmers? I am thinking I will rather hide Nothing as Void (ah, everybody is comfortable with void as "nothing" for return type and functions that take no parameters), and avoid ever mentioning it for use as a type parameter, except for contravariant type parameters, which are rare because they can never be read. Generality is nice and dandy, except when the Java community might have to expend 10,000 man-hours to deal with all the confusion about "what is Nothing?" for a concept that is only necessary for 0.001% of the use-cases.

I am not, in my last 2 posts (or 3, including this one), asserting that Option[T] can not point to None instead of None[T]. I am saying that None should never report true for #equals. Let's not conflate orthogonal issues.

Please check my logic here. None should never equal itself. Let me repeat as an example:

val os : Option[String] = None()
val ol : Option[List] = None()

if( os == ol ) // Do I have the same list or string?

That assumes that Some[T] implements #equals by comparing the value it stores.
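For reference, in standard Scala None is a single case object shared by every Option[T] (it extends Option[Nothing], and Option is covariant), so the comparison in the example above actually compiles and reports true -- exactly the behavior disputed in this post:

```scala
// In standard Scala, None is one singleton object, typed Option[Nothing].
// Because Option is covariant, that one object conforms to every Option[T]:
val os: Option[String]    = None
val ol: Option[List[Int]] = None

// Both references point at the very same object, so equality reports true:
val sameObject  = os eq ol // reference identity: the one shared None
val reportEqual = os == ol // structural equality: also true
```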

Apparently Haskell does have a bottom type, but you would only use it when your monad called some system function that could never return:

http://www.haskell.org/haskellwiki/Bottom

That is why I said you wouldn't encounter it if you used an error monad that would instead spool (chain) all errors back out to the main return, i.e. that an error call would simply unwind the function stack, and not cause a non-returning function.

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5318

I summarized it more concisely now:

Void is not a referencable type, it is only used to express inheritance (covariance) relationships, or the absence of function parameters and/or a return value.

Replace Void with Nothing for Scala.

Off topic: Copute separates trait into interface and mixin, where mixin is not a referencable type, so the concept of non-referencable type is not so esoteric.

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5321

@Horst
Thanks for running the test, because I have never run a Java, Scala, or Haskell compiler or interpreter in my life (I do it all in my head for now, because it forces me to think more deeply).

So that looks like a bug to me, as I expected. It should report false, but it reports true. The null bug doesn't make semantic sense to me either. All of those look like bugs to me. I vaguely remember having seen that null bug mentioned as a bug in general language design circles.

Nothing has infinite potential subtypes in the contravariant type parameter case. In the covariant case, it has infinite potential supertypes. But this does not cause a problem, because it can never appear as an assignable reference in any covariant or contravariant case, and LSP is all about reference substitutability (you can refer to that link I gave before about LSP as sets of possible states). Thus LSP does not apply to Nothing, because we are never substituting any reference with or from it. Nothing is only used for expressing inheritance relationships, or for indicating no value, never for actual substitutions.

Agreed on having a sound type system with no edge cases that fail.

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5324

@Daniel
In the case of general programming and logic constructions, of course you are correct, that we need to distinguish for functions between:

a) never returns, does not exist (Nothing, undefined in JavaScript)
b) returns but no value, exists but has no value (Void, Unit, null in JavaScript)

a) non-callable/unreachable (Nothing, undefined method in JavaScript)
b) called with no parameter (Void, Unit parameter)

Thanks for raising the issue.

But remember, I made the subtle (and probably buried) point that in a 100% referentially transparent, statically typed language, we never have functions that don't return, nor functions that don't return a value, nor functions that are non-callable, although we may have functions which are called with no parameter-- they are the functional equivalent of constants. So in that genre of language, where there is no overlap between them, there is no need to have separate Void and Nothing, so one could decide to call them by the same name (what I meant by "hiding Nothing in Void", since the inheritance uses are so esoteric and rare), and then use Void semantics for functions and Nothing semantics for inheritance, but always with one keyword Void.

We can relax the requirements for non-overlap. We just require that we never have functions that don't return (notice that Copute disallows thrown Exceptions; use an error monad if you want to unwind the function stack to terminate), nor functions that are non-callable.

In the Curry-Howard isomorphism, Bottom corresponds to a false formula (and Void a true formula), which means any program that contains a Bottom with respect to a function, is not a true formula in the corresponding logic mapping. Thus eliminating Bottom in function context, makes our programs true proofs of themselves. A false formula introduces undecidability.

If I say that Void is unreferencable, then it is consistent with a unit type for use in functions. It is also consistent with a nothing (bottom) type in inheritance.

I hope I didn't make an egregious error, but that seems correct to me at this groggy moment, 18 hours into my work day.

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5325

@Horst
If I remember correctly, the legacy treatment of null is due to implicit conversion to boolean false, so that if( object ) tests are less verbose than if( object != null ). So false == false is true.

One problem in many languages is that null is hard-wired to every type, so we are forced to test for null everywhere. Modern languages supply the Option type (Exception monad) instead, which we can then employ selectively on only the types we need exceptions on.

If the semantics of null is "does not exist", then why should ("does not exist" : Option[String]) == ("does not exist" : Option[List]) be true? For that matter, why should ("does not exist" : Option[String]) == ("does not exist" : Option[String]) be true?

The programmer is asking of equals in that context, "are these pointing to the identical pair of Some instances?". Not asking "are these pointing to the identical pair of either both Some or both None/null instances, but not one of Some and one of None/null?".

Shouldn't the answer agree with the question?

Very sleepy, hope this was coherent. Probably I can not add anything more of significance on this issue of None#equals.

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5327

@Horst and @anonymous
Even though I originally (in my early comments) noticed that None is an object, somehow it escaped me, and then I didn't correlate your point a few comments ago, that the key advantage of,

case object None extends Option[Nothing]

versus,

case class None[+T] extends Option[T]

is due to the object versus class distinction: the former only needs one instance of None for the entire program, which will be much more efficient when passing these Option types everywhere we would need a null in other languages. I of course originally thought the single object was a problem, until I realized the distinction between Option[Nothing] and the inability to create a reference to Nothing, and the orthogonal issue being the equality comparison result.
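A minimal sketch of the two designs contrasted above (Opt, Sm, and Nn are hypothetical stand-ins for the standard Option, Some, and None, so the standard library is not shadowed):

```scala
// Standard-library style: the empty case is one singleton object, made to
// conform to every Opt[T] by covariance plus Nothing being the bottom type.
sealed trait Opt[+T]
final case class Sm[+T](value: T) extends Opt[T]
case object Nn extends Opt[Nothing] // one instance for the whole program

// A hypothetical None[T]-style design would allocate a fresh empty instance
// per element type; the singleton conforms to all of them with no allocation:
val a: Opt[String] = Nn
val b: Opt[Int]    = Nn // the very same object as `a`
val c: Opt[Int]    = Sm(42)
```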

So I fully capitulate that the first way is superior. Until someone can elucidate otherwise, I think equality comparison for None should always return false, even when the other operand is None.

Thanks to both of you and Daniel for all the help. I hope this has been stimulating or otherwise helpful in some way. Apologies if we veered from the monad theme of this article.

My original main point was to show that the bind operator portion of a monad can be an inherited trait instead of an implicit typeclass. I am happy the discussion forked, because I gained numerous language design insights.


re: Every 10 years we need a new programming language paradigm

Post  Shelby on Thu Feb 24, 2011 10:50 pm

The blog author has been programming since 1969; the commentary is that Scala allows too much complexity (an unreadable, write-only language), but that a subset of Scala could avoid it, and they are asking for Copute's planned feature-set:

http://alarmingdevelopment.org/?p=562 (note I suggested HaXe on that thread, because Copute isn't ready yet)

Every 10 years we need a new programming language paradigm:

http://goldwetrust.up-with.com/t112p120-computers#4141

Shelby wrote:http://creativekarma.com/ee.php/weblog/about/

In 1975 I started using “structured programming” techniques in assembly language, and became a true believer.

In 1983 a new era dawned for me as I started doing some C programming on Unix and MS-DOS. For the next five years, I would be programming mixed C/assembly systems running on a variety of platforms including microcoded bit-slice graphics processors, PCs, 68K systems, and mainframes. For the five years after that, I programmed almost exclusively in C on Unix, MS-DOS, and Windows.

Another new era began in 1994 when I started doing object-oriented programming in C++ on Windows. I fell in love with OO, but C++ I wasn’t so sure about. Five years later I came across the Eiffel language, and my feelings for C++ quickly spiraled toward “contempt.”

The following year, 2000, I made the switch to Java and I’ve been working in Java ever since.

About now, it is time for the one that follows the Java paradigm (the virtual machine, garbage collection, no pointers, everything is an object).

Included in my point is that Android may be the killer-app, not OS:

http://goldwetrust.up-with.com/t112p135-computers#4233

============
Is JavaScript the next mainstream programming language?

http://www.richardrodger.com/2011/04/05/the-javascript-disruption/

Well not server-side, and it lacks referential transparency ("immutability"):

http://blog.objectmentor.com/articles/2008/12/29/a-wish-list-for-the-next-mainstream-programming-language

Note that the most widespread language is Intel assembly code. It is possible that the next mainstream language could be one that compiles to JavaScript to run on clients, i.e. JavaScript would be the compile target, not the high-level language (e.g. GWT compiles Java to JavaScript), as it lacks some critical features discussed below.

Here are pertinent articles on the next big mainstream language:

http://www.jroller.com/scolebourne/entry/the_next_big_jvm_language1
http://eugenkiss.com/blog/2010/the-next-big-language/ (Note Copute can do Rust's Typestate)
http://steve-yegge.blogspot.com/2007/02/next-big-language.html
http://lambda-the-ultimate.org/node/1277

===================
Current language rankings:

http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html

===================
http://itc.conversationsnetwork.org/shows/detail4764.html

While criticizing the verbosity and complexity of current high-level languages, Rob Pike gives his version of history:

@ 4:30min Rob Pike, co-creator of Go language wrote:How did we get here?

1) C and UNIX became dominant in research.

2) The desire for higher-level languages lead to C++, which grafted the Simula style of OOP onto C. It was a poor fit, but since it compiled to C, it brought high-level programming to UNIX.

3) C++ became the language of choice in parts of industry and in many universities.

4) Java arose as a cleaner, stripped down C++.

5) By the late 1990s, a teaching language was needed that seemed relevant, and Java was chosen.


Last edited by Shelby on Tue Jun 21, 2011 10:23 pm; edited 12 times in total


"Why the future doesn't need us." - I disagree

Post  Shelby on Fri Feb 25, 2011 7:37 am

Tangentially, note there is a more complete (adds images and important links) and easier-to-read version of the essay Understand Everything Fundamentally.

http://www.wired.com/wired/archive/8.04/joy.html

Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification.

Here he quotes the famous Ray Kurzweil, who quoted Theodore Kaczynski, the Unabomber:

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them.

[...]

Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.

Let us do something about that. I think humans will always be smarter than machines, the machines are just tools.

Bill Joy wrote:the fact that the most compelling 21st-century technologies - robotics, genetic engineering, and nanotechnology - pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate

Tsk tsk. Bill, please I think you are smarter than that. A computer is never smarter than the program it was given. A computer can not program itself, and never will be able to, because it can't be made truly sentient. Creativity can not be modeled, ever. It can be emulated, but only what has been created in past can be emulated. The future creativity belongs to man...well actually...oh never mind you wouldn't believe me any way...

The collective creativity of mankind far outstrips the number of programs that could ever be written, because for one reason, Russell's Paradox says there is no set that does not include itself. It doesn't matter how many millions of times faster you make the CPU, it is not the speed or memory capacity of an individual human brain that should be compared, but rather the fact that the computer hardware is only as smart as the humans that program it-- regardless how fast the hardware CPU is. Faster hardware CPUs will make humans much smarter. For example, my external memory is Google.

Bill, you committed the typical Malthusian mistake. You are as fooled as the ignorant Peak Oilers and Global Warmists. Please come back to your roots and senses.

Math proof: computers can never exceed human creativity

...The collective creativity of mankind far outstrips the number of programs that could ever be written, because for one reason, Russell's Paradox says there is no set that does not include itself...

http://goldwetrust.up-with.com/t112p120-computers#4183

Shelby wrote:
Russell's Paradox: there is no rule for a set that does not cause it to
contain itself, thus all sets are infinitely recursive.

[...]

And so follow all the other theorems, which all derive from, and are relative to, the 2nd law of thermodynamics, which says the universe is always trending to maximum disorder (i.e. maximum possibilities):

* Liskov Substitution Principle: it is an undecidable problem that subsets inherit.

* Linsky Referencing: it is undecidable what something is when it is
described or perceived.

* Coase Theorem: there is no external reference point, any such barrier
will fail.

* Godel's Theorem: any formal theory, in which all arithmetic truths can
be proved, is inconsistent.


* 1856 Thermo Law: entire universe (a closed system, i.e. everything)
trends to maximum disorder.

The Emperor's New Mind, by Roger Penrose

http://en.wikipedia.org/wiki/Orch-OR

...Godel's Theorem showed that the brain had the ability to go beyond what could be achieved by axioms or formal systems. This would mean that the mind had some additional function that was not based on algorithms (systems or rules of calculation). A computer is driven solely by algorithms. Penrose asserted that the brain could perform functions that no computer could perform, known as "non-computable" functions.

Penrose went on to consider what it was in the human brain that might not be driven by algorithms. The physical law is described by algorithms, so it was not easy for Penrose to come up with physical properties or processes that are not described by them. He was forced to look to quantum theory for a plausible candidate.

[...]

The quantum waves are essentially waves of probability, the varying probability of finding a particle at some specific position

[...]

the choice of position for the particle is random.

The creativity of the mind is unbounded non-determinism, meaning that the disorder (i.e. # of possibilities) in the universe is always increasing. Thus, the mind can never be entirely described by any static algorithm. If they make a computer to model the synapse structure, then they will need to model subatomic processes that occur within the brain, which are themselves unbounded non-determinism. It is impossible to put unbounded non-determinism in an algorithm-- life itself requires that life can not know itself, because every definable set includes itself (Russell's Paradox; ironically Russell was an atheist and couldn't see the implication of his paradox!).

This reminds me of an article I read today where scientists found an exploded star so dense (60 billion tons per teaspoon) that its neutrons have all aligned and so they are seeping out without any hole for them to seep out through.

Yet again, my Theory of Everything, which is that the universe wraps back onto itself in terms of entropy, has shown itself to explain everything.

(tangential point: actually it is our perception/measurements which are always finding new possibilities; the universe may already have infinite possibilities)

The following widely accepted principle also supports the above conclusions:

http://en.wikipedia.org/wiki/Uncertainty_principle

Roger Penrose explains more in a video:

http://www.youtube.com/watch?v=yFbrnFzUc0U

A beginning of space-time doesn't exist

In the following video, he is getting closer to my theory, but he misses the point that we can't go back in time, because we would need infinite time to do so, because time in the past sees our clock as slower-- infinitely slower if we want to go back to antiquity:

http://www.youtube.com/watch?v=pEIj9zcLzp0

==================
ADD: to those idiot commentators at the Amazon.com book link to Emperor's New Mind above: yes, a computer can be subject to unbounded non-determinism, and it is known as a "bug". You fools entirely missed the point that the non-determinism is what the algorithms are not. An algorithm is static, dead, not unbounded-- which is precisely why every program will always have "bugs" forever. Even if the bug is that it can't interact with every external state.

http://www.amazon.com/review/R1B73KYRB2LYOP/ref=cm_cr_rev_detmd_pl?ie=UTF8&cdMsgNo=23&cdPage=3&asin=0192861980&store=books&cdSort=oldest&cdMsgID=Mx3AZ9JR4LN385E#Mx3AZ9JR4LN385E

Shelby wrote:
I see that you have entirely missed the point.

Godel's theorem is fundamental, not some straw-man abstraction based on an initial axiom. Let me rephrase it as "any formal theory, in which all arithmetic truths can be proved, is inconsistent". Essentially it is a restatement of Russell's Paradox, which I phrase "there is no rule for a set that does not cause it to contain itself, thus all sets are infinitely recursive". Try as you might until forever, you will never find one exception to Russell's Paradox.

Several other theorems say the same thing in different contexts:

* Liskov Substitution Principle: it is an undecidable problem whether subsets inherit.

* Linsky Referencing: it is undecidable what something is when it is described or perceived.

* Coase Theorem: there is no external reference point; any such barrier will fail.

* Second Law of Thermodynamics (c. 1856): the entire universe (a closed system, i.e. everything) trends to maximum disorder (maximum possibilities).

A computer can also be subject to unbounded non-determinism (the phenomenon that Penrose explains gives rise to ever-changing human creativity), and it is known as a "bug". Unbounded non-determinism is what static algorithms are not. An algorithm is static, dead, not unbounded, which is precisely why every program will always have "bugs" forever, even if the bug is only that the algorithm can't interact with every possible external state; due to the second law of thermodynamics, the possible states of the universe are always increasing.

Olly, you are looking for an explanation of what consciousness is, but nothing can be alive and explain the rules for what it is. And that is fundamental. So fundamental that it actually proves that science can never measure creation. Go to my site goldwetrust.up-with.com and read more in the Technology and Knowledge sections.

I won't be checking back here, contact me at my site if you want to discuss it further.

==================
Professor Chomsky,

With all due respect for your expertise in the field of linguistics, I am not surprised that in your interview on faith ( http://www.youtube.com/watch?v=ewP5tNLBb2E ) you would quote Bertrand Russell to support an irrational denial of Russell's own Paradox:

http://goldwetrust.up-with.com/t112p135-computers#4264

P.S. I admire your many logical statements in various interviews. I simply suppose you've been lacking a key insight. So that is why I emailed you. Hope you are not perturbed by my audacity.

Shelby Moore III


===============
ADD: Rat Brain Modelers Denounce IBM's Cat Brain Simulation as "Shameful and Unethical" Hoax
The Blue Brain project leader says that IBM's simulated brain does not even reach an ant's brain level

http://www.popsci.com/technology/article/2009-11/blue-brain-scientist-denounces-ibms-claim-cat-brain-simulation-shameful-and-unethical

IBM's claim of simulating a cat cortex generated quite a buzz last week, but now the head researcher from the Blue Brain project, a team that is working to simulate its own animal brain (a rat's), has gone incandescent with fury over what he calls the "mass deception of the public."

Henry Markram leads the Blue Brain project that successfully simulated a self-organizing slice of rat brain at the École Polytechnique Fédérale de Lausanne in Switzerland. He has issued a point-by-point denouncement of the cat claim that bubbles with outrage at IBM Almaden's Dharmendra Modha.

"There is no qualified neuroscientist on the planet that would agree that this is even close to a cat's brain," Markram writes in his e-mail to IBM. "I see he [Modha] did not stop making such stupid statements after they claimed they simulated a mouse's brain."

Markram calls the IBM simulation a "hoax and a PR stunt" that any parallel machine cluster could replicate. He adds that creating a billion interactive virtual neuron points represents no meaningful achievement as far as simulating intelligence, but merely reflects the brute supercomputing power at IBM's disposal.

"We could do the same simulation immediately, this very second by just loading up some network of points on such a machine, but it would just be a complete waste of time -- and again, I would consider it shameful and unethical to call it a cat simulation," Markram says. He suggested that IBM's simulation feat does not even reach the levels of ant intelligence.

The Blue Brain researcher concludes by expressing his shock at IBM and DARPA's support of the virtual feline brain, and says that he would have expected an ethics committee to "string Modha up by his toes." Yikes.

Still, Markram has a point. Creating any sort of artificial intelligence has long represented a difficult and arduous process, and so expecting a miracle breakthrough seems unlikely. Perhaps we should have paid more attention to the novel Good Omens, where Hell's agent Crowley owns "an unconnected fax machine with the intelligence of a computer and a computer with the intelligence of a retarded ant." To add some more perspective, that book was published back in 1990.

animemaster
11/23/09 at 4:43 pm
Why can't they simulate every cell down to the molecule and fold of each protein? Now that would be a show of brute force worth wowing over.

Abandonfish
11/23/09 at 8:09 pm
Even comparing it to an ant's brain is quite a leap; a point neuron simulation is definitely not capable of the behavioral intelligence displayed by insects.
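For context on what a "point neuron" simulation of the kind Markram dismisses actually amounts to, here is a minimal leaky integrate-and-fire sketch. It is just a loop over simple state updates with no biological detail; all parameters (leak factor, threshold, input range, network size) are illustrative, not taken from IBM's model:

```python
import random

def step(v, inputs, tau=0.9, threshold=1.0, reset=0.0):
    """One update for each point neuron: leak toward 0, add input, spike on threshold."""
    spikes = []
    for i in range(len(v)):
        v[i] = v[i] * tau + inputs[i]   # leaky integration of input current
        if v[i] >= threshold:           # fire and reset
            v[i] = reset
            spikes.append(i)
    return spikes

random.seed(0)
n = 1000                 # a "billion points" is this same loop, scaled up on a cluster
v = [0.0] * n            # membrane potentials
for t in range(100):
    inputs = [random.uniform(0.0, 0.2) for _ in range(n)]
    spikes = step(v, inputs)
```

Scaling this to a billion neurons is purely a matter of machine size, which is exactly Markram's point: it demonstrates supercomputing capacity, not intelligence.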


==================

http://www.caseyresearch.com/cdd/brain-vs-computer?active-tab=archives

As to processor speed, let’s assume a very conservative average firing rate for a neuron of 200 times per second. If each signal is passed to 12,500 synapses, then 22 billion neurons are capable of performing 55 petaflops (a petaflop is one quadrillion floating-point operations per second).

The world’s fastest supercomputer, a monster from Japan unveiled by Fujitsu at a conference this past June, has a configuration of 864 racks, comprising a total of 88,128 interconnected CPUs. It tested out at 8 petaflops (which only five months later was upped to 10.51 petaflops). Our brains are still more than five times faster.
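The arithmetic behind these figures can be checked directly (the neuron count, firing rate, and synapse count are the article's rough assumptions, not measured values):

```python
# Back-of-envelope check of the article's brain-throughput estimate.
neurons = 22e9          # cortical neurons (article's figure)
firing_rate = 200       # firings per second per neuron (conservative assumption)
synapses = 12_500       # synapses each signal is passed to

ops_per_sec = neurons * firing_rate * synapses
petaflops = ops_per_sec / 1e15   # one petaflop = 1e15 operations per second
print(petaflops)                 # 55.0

# Compared with the Fujitsu machine's later 10.51-petaflop benchmark:
ratio = petaflops / 10.51        # roughly 5.2x
```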

But that’s not even half the story. Unlike transistors locked into place on their silicon wafers, synaptic connections can and do move over time, creating an ever-shifting environment where the possible hookups are, for all practical purposes, limitless. Furthermore, there are another 78 billion neurons, give or take, outside of the cortex, hard at work on other complex functions.

The wiring complexity of our brains alone means that in the crude terms we understand computers today, our brains are much more complex than anything we’ve built, and still faster than even the most expensive supercomputer ever built.

On top of that, we are only beginning to understand the complexity of that wiring. Instead of one-to-one connections, some theorists postulate that there are potentially thousands of different types of inter-neuronal connections, upping the ante. Moreover, recent evidence points to the idea that there is actually subcellular computing going on within neurons, moving our brains from the paradigm of a single computer to something more like a self-contained Internet, with billions of simpler nodes all working together in a massive parallel network. All of this may mean that the types of computing we are capable of are only just being dreamt of by computer scientists.

Will our electronic creations ever exceed our innate capabilities? Almost certainly. Futurist Ray Kurzweil predicts that there will be cheap computers with the same capabilities as the brain by 2023. To us, that seems incredibly unlikely.


Last edited by Shelby on Wed Feb 08, 2012 1:37 pm; edited 21 times in total

