
Can I find a psychiatrist on here?

I'm a fellow resident in another specialty, and I have a few questions I'd like to ask, if possible.
I'm having some serious mood issues that I want to discuss, because I'm scared I've been overlooking the symptoms of a mental health disorder over the past few years of my life.
Thank you.
MartinTheFirst · 26-30, M
Even if someone on here were qualified, you wouldn't be able to verify that they are. You should know that.
Friendlyperson · 26-30, F
@MartinTheFirst
Yeah, I'm aware... I have rather specific scientific questions, so anyone faking it would need to prove their authenticity.
MartinTheFirst · 26-30, M
@Friendlyperson Have you tried asking chatGPT?
Friendlyperson · 26-30, F
@MartinTheFirst Good idea, I'll go check it, thanks!
This message was deleted by its author.
MartinTheFirst · 26-30, M
@LadyGrace What are you talking about now?
This message was deleted by its author.
MartinTheFirst · 26-30, M
@LadyGrace Okay, first of all, I believe you have no idea how technology works. I'm an AI developer; I know how chatGPT works, what it can do, and what it cannot do, and none of the things you stated are physically possible for chatGPT. All it can do is take text and put that text through a purely mathematical operation that gives you a response for that input. There is no way a hacker could use that to do something to you, besides maybe (and that's a big maybe) sending you text without you writing anything to it. It sounds like either you're making it up, imagining it, or you have some virus that has nothing at all to do with chatGPT.
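The text-in, text-out contract described above can be sketched in a few lines. This is a toy stand-in, not a real model or API (the function name and behavior are made up for illustration): the point is only that a language model maps an input string to an output string, and nothing it returns is ever executed by the model itself.

```python
# Hypothetical stand-in for a language model, illustrating the
# text-in / text-out contract: input is a string, output is a string.

def generate_reply(prompt: str) -> str:
    """Deterministic toy 'model': same input text -> same output text."""
    # A real LLM applies learned weights to produce its reply;
    # this toy just returns a transformed string to show the shape
    # of the interaction.
    return f"Reply to: {prompt!r} ({len(prompt)} chars)"

reply = generate_reply("print('hello')")
# Even if the input looks like code, the output is inert text --
# it is never run unless a user deliberately executes it elsewhere.
assert isinstance(reply, str)
```

Whatever the reply contains, it only becomes dangerous if a person copies it out and runs it themselves, which is the distinction being argued in this thread.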
This message was deleted by its author.
@MartinTheFirst Datalink Networks
https://www.datalinknetworks.net › ...
How to Prevent a Chat GPT Attack
Apr 4, 2023 — Malware: ChatGPT can create and spread malware, which can infect computers and cause serious damage or even data loss.
MartinTheFirst · 26-30, M
@LadyGrace Okay, but I know that you're wrong, Grace, so start speculating on what actually caused it.
This message was deleted by its author.
MartinTheFirst · 26-30, M
@LadyGrace Claim 1: ChatGPT can create malware - Yes, chatGPT can write code, and if you're a programmer trying to create malware, then you can write malware with it. However, this code will not be run on chatGPT; it does not execute any of the code it creates. It'd be very difficult for any normal user to get this code to execute (they'd have to be a programmer, and then they would know it was malware in the first place and would not run it).

Claim 2: ChatGPT can spread malware - Any software you use can be used to spread malware. However, it has nothing to do with what the software was originally intended for; it can be something as innocent as a calculator. Obviously the calculator won't actually be the malware that you're running, but there could be code behind it that you do not see which is the actual malware. In this case, the AI itself could not possibly be the malware. To stay safe, only download things you trust... but even then you might end up getting malware. It's just how the internet works.
@MartinTheFirst
Any software you use can be used to spread malware. It has nothing to do however with what the software is originally intended for, it can be as innocent as a calculator.

That still doesn't take away from the fact that it can create and spread malware, which can infect computers and cause serious damage or even data loss, regardless of what it was intended for.
@MartinTheFirst ChatGPT can potentially generate harmful content, including phishing emails, social engineering attacks, or even malicious code.

According to Information Trust Institute:
As an AI language model, ChatGPT has several safety concerns related to cybersecurity:

Data leakage: ChatGPT is trained on a vast amount of data, which may unintentionally include sensitive information or code snippets that could be exploited.

Misuse for malicious purposes: ChatGPT can potentially generate harmful content, including phishing emails, social engineering attacks, or even malicious code. Malicious actors may try to exploit the technology to create malware or other attack vectors.

Bias and manipulation: ChatGPT might generate content that contains biases or inaccuracies, which could be leveraged by attackers to spread misinformation or manipulate users.
MartinTheFirst · 26-30, M
@LadyGrace Indeed, it's a worry for everyone, and all developers have to constantly double-check their security so that they're not compromised. However, just because all software can be used to spread malware, it doesn't mean that it is spreading malware. It's simply fear-mongering to single out one application for being able to spread malware when it actually concerns all applications.
MartinTheFirst · 26-30, M
@LadyGrace Data leakage is not malware, although you should be careful what you write to an AI so you do not share sensitive information. It says so on the front page of chatGPT.

Misuse is the same thing I discussed above under Claim 1.

Bias and manipulation is an ethical consideration about what the AI's "beliefs" are; for example, we don't want AI to appear to be racist. But it is not malware.
This message was deleted by its author.
This message was deleted by its author.
MartinTheFirst · 26-30, M
@LadyGrace The problem is that you have no idea what you're actually talking about; you're not a programmer, and you don't really know what happened to your phone, so you're spreading misinformation. I don't mind it really, as I have no stakes in this, but I think your reasoning is interesting, and I hope you learned something at least.
MartinTheFirst · 26-30, M
@LadyGrace Yes, that is very peculiar; I would have to see it for myself.
This message was deleted by its author.
This message was deleted by its author.
MartinTheFirst · 26-30, M
@LadyGrace If sensitive information is leaked to a bad actor, then terrible things can happen, so try not to write anything sensitive. But in the case of an AI such as chatGPT, the main concern regarding data leakage has more to do with what it is trained on from the get-go. Imagine, for example, if the AI were trained on a huge number of medical documents concerning individuals; that information could then be leaked in some complicated way, and people could end up hurt. It has less to do with what one individual wrote to chatGPT, although that's a small concern as well, especially for yourself.
MartinTheFirst · 26-30, M
@LadyGrace In that situation it was wise of you to reset your phone, but trust me, the AI of today cannot do that. It must have been a person.
@MartinTheFirst Well, I may not be as knowledgeable as you about it, but I'm finding that manipulation of ChatGPT is not impossible, and with enough knowledge and creativity, bad actors could potentially trick the AI into generating hacking code. On hacking forums, hackers have claimed to be testing the chatbot to recreate malware strains.