>instead of loading the whole model, I can use the tokenizer of a large model and decrypt the message. So the sender could use a 400B-parameter model, but a phone user can decode it just by having the key and the tokens, and read the contents. The hardware limitation has been bypassed
>you could have central command post texts on reddit, substack, twitter and 4chan that look very human and don't seem out of place. The user never needs to be caught using signal or tor; an internet layer on top of a layer has been created
>works across cuda, mac and other platforms
>seed/password locked
>only the person crafting the message needs good compute
>receiver can decode it easily
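A minimal sketch of the "tokenizer + shared seed, no model weights" decode path the OP describes. Everything here is an illustrative assumption, not the OP's actual scheme: the toy whitespace vocabulary `VOCAB`, the seed string, and the keyed permutation over token IDs are all stand-ins. A real version would use a production tokenizer (e.g. a Hugging Face one) and a real stego channel; this only shows that the receiver needs the tokenizer and the seed, not the 400B model.

```python
# Toy sketch of the "tokenizer + shared seed" decode idea from the OP.
# Both parties share only a vocabulary and a seed; the sender maps the
# secret's token IDs through a seed-derived permutation, and the receiver
# inverts it. NOTE: this is a keyed substitution over token IDs, not real
# encryption or steganography -- it only demonstrates the decode path.
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "to", "park",
         "meet", "at", "noon", "bring", "keys", "red", "blue", "house",
         "now", "go"]
TOK = {w: i for i, w in enumerate(VOCAB)}

def keyed_permutation(seed: str, n: int) -> list:
    # Derive a deterministic permutation of token IDs from the shared seed.
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def encode(secret: str, seed: str) -> str:
    perm = keyed_permutation(seed, len(VOCAB))
    ids = [TOK[w] for w in secret.split()]        # tokenize the secret
    cover_ids = [perm[i] for i in ids]            # keyed substitution
    return " ".join(VOCAB[i] for i in cover_ids)  # detokenize -> cover text

def decode(cover: str, seed: str) -> str:
    perm = keyed_permutation(seed, len(VOCAB))
    inv = {p: i for i, p in enumerate(perm)}      # invert the permutation
    ids = [inv[TOK[w]] for w in cover.split()]
    return " ".join(VOCAB[i] for i in ids)

cover = encode("meet at the park now", "hunter2")
print(cover)                     # cover text; depends on the seed
print(decode(cover, "hunter2"))  # -> "meet at the park now"
```

Only `encode` would run on the big-compute side; `decode` is a dictionary lookup plus an inverted permutation, which any phone can do. The heavy part the OP hand-waves past is making the cover text actually read as human, which is where the large model would come in on the sender's side.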
>>108254222
did your AI tell you that this was a brilliant idea, and that you're absolutely right?
>>108254222
based department. bumping for interest.
>>108254396
>did your AI tell you that this was a brilliant idea, and that you're absolutely right?
lol why do you ask, jealous?
>>108254222
the server generating the message can decode it easily too. tokenization is not encryption. I'm honestly not sure what you think is novel about this
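The point above can be shown in a few lines: a tokenizer is a public, deterministic lookup table, so anyone holding the same vocabulary inverts it with no key at all. The toy whitespace tokenizer below is an assumption for illustration; real tokenizers (BPE etc.) are more complex but just as public.

```python
# A tokenizer is just a public lookup table: encoding and decoding need no
# secret, so token IDs alone hide nothing from anyone with the same vocab.
VOCAB = ["meet", "at", "the", "park", "now"]
TOK = {w: i for i, w in enumerate(VOCAB)}

def tokenize(text):
    return [TOK[w] for w in text.split()]

def detokenize(ids):
    return " ".join(VOCAB[i] for i in ids)

ids = tokenize("meet at the park now")
print(ids)              # [0, 1, 2, 3, 4]
print(detokenize(ids))  # anyone can invert this: "meet at the park now"
```

Without a secret mixed in somewhere (a seed, a key, a stego sampling scheme), tokenization by itself provides no confidentiality, which is the anon's objection.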
>>108254675
>the server generating the message can decode it easily too. tokenization is not encryption. I'm honestly not sure what you think is novel about this
yeah, that's why I own the server
>>108254222
This is exactly what I'm working on!!!!