/sci/ - Science & Math

File: Tohru2.jpg (155 KB, 677x1699)
What do you do when you discover something new and/or interesting, which is not useful?

I have discovered a method for finding Bipolar Binary Neural Networks that match arbitrary truth tables, and it always finds the smallest such network. However, the size of the search space and the amount of compute needed mean that for something small like an MNIST digit classifier, all the computers on Earth could not find the network with my method in a hundred trillion years.

Do I publish it anyway, hoping someone smarter than me could possibly make it practical somehow? Does it have any value if it will almost certainly never have a practical application?

What do you do with your useless research and how do you cope with the time you spent producing it instead of doing something else?
>>
>>16786872
>What do you do when you discover something new and/or interesting, which is not useful?
What every proper scientist does: publish it. Correctness and honesty are optional. If you manage both of those, you're in the top 1% of science papers.
>>
>>16786872
Also, if you explain what you're talking about (why did you give your neural networks a mental illness?), maybe you'll get more useful responses.
>>
>>16786872
>it will almost certainly never have a practical application
Neither did the first steam engine.
>>
>>16786872
there have been lines of research that weren't useful until decades later. it happens regularly in physics.
>>
>>16786872
like everyone, you keep it to yourself, keep building on it, and pass it to your kids when you die.
if there is a small chance it's exploitable, you keep it to yourself, or someone else will exploit it and reap the fruit of your labour.

everyone telling you to publish is either brainwashed or seeking control over you
>>
File: Maid inspection.gif (1.17 MB, 480x480)
>>16786876
I will consider this, but publication feels malicious if the information is not useful.

>>16786878
I am not sure what you mean. The Neural Networks I can find with my hardware are very small and only simulate small things like logic gates, or taking three inputs and reversing them. Even scaling from reversing three to reversing four inputs increases the search space so much that I could not find such a network with my Science Computer in reasonable time.
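For a sense of the scaling OP describes (using hypothetical layer sizes, since the actual architecture isn't stated): every weight and bias takes one of two values, so each extra parameter doubles the number of candidate networks.

```python
def num_configs(n_params):
    # Every weight/bias is -1 or +1, so there are 2**n_params candidates.
    return 2 ** n_params

# Hypothetical architecture: one dense n-to-n layer has n*n weights
# plus n biases. Going from 3 to 4 inputs already multiplies the
# exhaustive search space by a factor of 256.
for n in (3, 4):
    p = n * n + n
    print(f"n={n}: {p} parameters, {num_configs(p)} candidate networks")
```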

>>16786879
It is nice to see you again, Cult of Passion. You are the nicest person on the board, but my research is no steam engine.

>>16786883
So, publish anyways and hope it is useful some time after I die? I could maybe get behind this.

>>16786887
I don't have kids. If my research were exploitable, I would release it without hesitation. It has no value unused, and I will never have, or possibly even live to see, the amount of compute needed to use it.
>>
>>16786894
>I am not sure what you mean.
I mean explain what your thing actually is and what it's doing. I know how a neural network works. I assume a binary neural network is one with binary weights. Now, what the hell is a "bipolar" neural network?
>>
File: Tohru3.jpg (1022 KB, 2800x4000)
>>16786903
>Now, what the hell is a "bipolar" neural network?
A Bipolar Binary Neural Network is one where all weights and biases are -1 or 1. The point is that such a network is easy to translate to HDL. My goal was to find networks and then turn them into physical hardware via an FPGA or similar. If this could be done with large and useful networks, they would become faster, physically smaller, and less power-hungry. But the method I have for finding them has a hopelessly large search space.
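For readers following along, here is a toy sketch of what such a search might look like (my guess at the setup, not OP's actual code): a single bipolar threshold neuron, with every ±1 weight/bias assignment enumerated against a target truth table. I assume the usual bipolar encoding (-1 = false, +1 = true) and resolve a pre-activation of exactly 0 to +1.

```python
from itertools import product

def neuron(weights, bias, x):
    # Bipolar threshold unit: inputs, weights, and bias are all -1 or +1.
    # Convention here: a pre-activation of exactly 0 maps to +1.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else -1

def find_neuron(table):
    # Exhaustively try all 2**3 = 8 weight/bias assignments of a single
    # 2-input neuron against a truth table (dict: input tuple -> output).
    for w1, w2, b in product([-1, 1], repeat=3):
        if all(neuron((w1, w2), b, x) == y for x, y in table.items()):
            return (w1, w2), b
    return None  # no single neuron realizes this table (e.g. XOR)

# OR under the encoding -1 = false, +1 = true:
or_table = {(-1, -1): -1, (-1, 1): 1, (1, -1): 1, (1, 1): 1}
print(find_neuron(or_table))
```

With more inputs, layers, and neurons the same enumeration applies, but the exponent grows with the parameter count, which is exactly the wall OP is describing.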
>>
>>16786960
Hm. I see. So what exactly stops you from finding a "bipolar neural network" for a NAND gate, composing these NAND nets into bigger nets in some mechanistic fashion then algorithmically simplifying them to get a "minimal" version?
>>
File: Tohru4.jpg (1.86 MB, 2531x3599)
>>16786973
I tried composing them. It doesn't produce the smallest possible network: for example, if I compose NAND and NOT, the resulting net is larger than the AND found without composition. I could use compositions to put an upper bound on the size of a possible smaller network, though, and restrict the search space with it.
>>
>>16786990
>This doesn't result in the smallest possible network.
Obviously. I'm just thinking there are probably ways to start off from a sub-optimal net that works and then optimize it using dynamic programming. Though I'm not sure it would necessarily end up being faster than whatever you're doing...
>>
>>16786990
Your options are to collaborate and improve it, publish now so others can improve it, or wait for this whole arena of research to somehow get solved out in the universe and write back saying "hello" with an identifier claiming to recognize you as its 'father'.


I don't actually recommend any of these but FFS publish even on /sci/ if you want.
>>
File: Tohru5.jpg (168 KB, 850x600)
>>16787545
Thank you for being nice. I will prepare the Python version for publication to /sci/ since /Sci/entists seem to like Python more than Java. It will take a couple days. If PDF posting is still allowed here, I will also publish a commentary on the code. If PDF posting is not allowed, I might put it inside a Maid Card and publish it here with that.

If you want to run it, you will need Python, Poetry, Verilog and Graphviz: when it finds a network, it uses Graphviz to draw it and then emits and verifies HDL for it.
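Since the code isn't posted yet, here is only a guess at what the HDL-emission step might look like (entirely hypothetical names and mapping, not OP's generator): map bipolar values to bits as -1 → 0 and +1 → 1, so a neuron becomes a threshold gate over the inputs that agree with their weights.

```python
def emit_neuron_verilog(name, weights, bias):
    # Hypothetical sketch of Verilog emission for one bipolar neuron.
    # Bit encoding assumed: -1 -> 0, +1 -> 1. With w, x in {-1, +1},
    # w*x == +1 exactly when bit(x_i) == bit(w_i), so the pre-activation
    # is 2*agree - n + bias, and the output is 1 when that is >= 0.
    n = len(weights)
    agree = " + ".join(
        f"(x[{i}] == 1'b{1 if w == 1 else 0})" for i, w in enumerate(weights)
    )
    # Smallest agree count satisfying 2*agree - n + bias >= 0:
    thresh = (n - bias + 1) // 2
    return (
        f"module {name}(input [{n - 1}:0] x, output y);\n"
        f"  assign y = (({agree}) >= {thresh});\n"
        f"endmodule\n"
    )

print(emit_neuron_verilog("and_gate", [1, 1], -1))
```

For the AND neuron (weights +1, +1, bias -1) this emits a module whose output is high only when both input bits match their weights, which is the translate-to-hardware step OP describes.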
>>
>>16786894
> I will consider this, but publication feels malicious if the information is not useful.

I would say to that: you never know what will be useful. It's very hard to tell what kinds of research might be inspired by or related to your work. If even one person reads it and thinks "wow, that idea might help with X", then it's useful.
>>
>>16787665
>nice
Oh, I don't have a choice. >>>/x/39991446 was me; any appearance of Tohru forces our understanding to advance as my principles require. Cannot actually harm a bridge from one world to another. Honestly my contract would easily push a soul into whatever construct animates/powers her transformation magic without concern for survival or energy requirement (defaulting to mine if there are not better options)

>>16787671
My understanding of this research comes from our exploration of magical history, all I can say is the circuits mapping from human to dragon psi need a couple highly polarized layers of fluid to keep energy domains separate.
>>
I haven't read in depth about your topic, and I am not wholly familiar with neural networks, as I am a researcher in comp EM. But your state-space issue and the claim that you "can find the smallest neural network" remind me at first glance of naive undergrad approaches to NP problems, where you compute every possible solution, iterate over all of them to find the best one, and, if you are particularly naive, classify this as a P-time problem. Considering what you have is mostly an optimization problem, I would be cautious that you are not doing the same thing, though obviously neural network sizing isn't remotely the same as set cover.

For publishing, post something on arXiv and send it around to any academics you know (unsolicited email works too). Getting something on arXiv establishes your claim to an idea without having to submit to a conference if you are unsure of its novelty. As for usefulness, most research isn't useful. The best you can hope for is to have some insight into a problem area or explain something about neural networks or machine learning. If you can do that, it is 100% publication-worthy. The barrier to publication really isn't that high.
>>
keep


