JSDoc difficulties when referencing types from an npm package.

I have been satisfied with JSDoc for typing, instead of full TypeScript, for a while now, since it can do pretty much anything TypeScript can, especially given that importing types from a .ts file is also possible.

However, the following issue around referencing and building types from npm packages has become enough of a problem, and enough of a waste of time to work around, that I can only recommend full TypeScript at this point.

I have included the AI conversation I had while trying to resolve this, but in the end the solution is really just to use TypeScript instead.


When I try to use JSDoc types from a file in a library I installed locally via npm, it does not work (in VS Code). Is there no way to get this working?

I’m here to help you get those JSDoc types working in VS Code!

Here are the approaches you can take:

1. Check for Type Declaration Files (.d.ts):

  • If the library includes .d.ts files, VS Code should automatically recognize the types.
  • If not, proceed to the next steps.

2. Generate Type Declaration Files:

  • If the library doesn’t have .d.ts files, you can generate them:
    • Use tools like dts-gen or api-extractor to create them from the library’s source code or JSDoc comments.

3. Reference Types Directly:

  • If you know the specific types you need, reference them directly in your code:

    JavaScript
    /**
     * @param {LibraryName.Type} param - Description of the parameter
     */
    function myFunction(param) {
        // ...
    }
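
A more direct JSDoc option, when it works, is the import() type syntax, which references a package's exported types inline. A minimal sketch (the package name "some-lib" and its "Options" type are placeholders, not a real library):

```javascript
// Hypothetical: "some-lib" and its "Options" type stand in for a real
// package. The import() lives inside a comment, so it is type-level only;
// the file still runs as plain JavaScript.
/** @typedef {import("some-lib").Options} Options */

/**
 * @param {{ retries: number }} options - inline structural type as a fallback
 * @returns {number} the configured retry count
 */
function getRetries(options) {
  return options.retries;
}

console.log(getRetries({ retries: 3 })); // → 3
```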
    

4. Manually Create a Declaration File:

  • If the above options aren’t feasible, create a .d.ts file manually:

    TypeScript
    declare module 'library-name' {
        // Define the types here
    }
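
For instance, a hand-written declaration might look like this (every name below is a placeholder for whatever the real library actually exports):

```typescript
// typings/some-library/index.d.ts — hypothetical shape; replace the
// module name and members with the library's actual exports.
declare module "some-library" {
  export interface Options {
    verbose?: boolean;
  }
  export function init(options?: Options): void;
}
```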
    

5. Configure VS Code:

  • Check JSDoc support: VS Code uses JSDoc comments for IntelliSense in JavaScript files by default; enabling the “Js/Ts › Implicit Project Config: Check JS” setting additionally turns on type checking for JavaScript.

  • Include additional type roots: If global declaration packages live outside node_modules/@types, list their locations via typeRoots in jsconfig.json or tsconfig.json (note this only affects global @types-style packages, not types for modules you import):

    JSON
    {
        "compilerOptions": {
            "typeRoots": ["./node_modules/@types", "./path/to/library/types"]
        }
    }
    

Additional Troubleshooting:

  • Check for syntax errors: Make sure your JSDoc comments are correct.
  • Restart VS Code: Sometimes a restart fixes type recognition issues.
  • Update extensions: Keep VS Code and related extensions updated.

If you continue to face issues, please provide more details about:

  • The specific library you’re using
  • Your project setup
  • Any error messages you’re encountering

I’ll offer more tailored guidance based on the specific context.

I know I can convert to TS, but what about when keeping it as JSDoc?

While TypeScript offers superior type safety and integration with VS Code, there are ways to use JSDoc for type definitions in your project even if you prefer to remain in JavaScript:

1. Consider the Approaches Mentioned Previously:

  • Check for existing type declaration files: If the library provides them, VS Code should recognize the types automatically.
  • Manually create a type declaration file: This method requires writing a separate .d.ts file to define the types for the library (explained earlier).
  • Reference types directly: If you know the specific types you need, reference them directly in your code using JSDoc annotations (also explained previously).

2. Ensure JSDoc Support in VS Code:

  • VS Code uses JSDoc comments for IntelliSense in JavaScript files by default. Go to Settings (or Preferences on macOS) and enable “Js/Ts › Implicit Project Config: Check JS” to additionally type-check JavaScript.
  • This lets VS Code use your JSDoc comments for type checking, not just hovers and completion.

3. Address Limitations:

  • Limited IntelliSense: Compared to TypeScript, JSDoc offers less comprehensive IntelliSense. You might encounter scenarios where the code completion or type checking is less robust.
  • Manual Maintenance: Manually maintaining JSDoc comments and ensuring their accuracy can be more work compared to the type checking offered by TypeScript.

In conclusion, while using JSDoc is still an option for type definitions in JavaScript projects, it’s important to be aware of its limitations and potential drawbacks compared to TypeScript. Consider these factors when deciding which approach best suits your project needs and preferences.

It seems VS Code settings do not help and that I need to generate .d.ts files. How can I generate a .d.ts file into the library in node_modules?

It’s important to understand that modifying files directly within node_modules is generally not recommended. This directory is managed by npm and changes made to it could be overwritten during future package updates.

Instead, consider the following approaches for generating type declarations without modifying the library in node_modules:

1. Use a Dedicated Folder:

  • Create a separate folder within your project, for example, typings.

  • Use a tool like dts-gen or api-extractor to generate the .d.ts files in this folder.

  • Configure your project (e.g., in tsconfig.json) to include this folder as a source for type definitions:

    JSON
    {
        "compilerOptions": {
            "typeRoots": ["./typings", "./node_modules/@types"]
        }
    }
    

2. Use a Global Typings Repository:

  • Check if the library already has type definitions available in a global repository like @types.

  • If so, install the corresponding package:

    Bash
    npm install --save-dev @types/library-name
    

3. Create a Local Declaration File:

  • Create a .d.ts file within your project and define the types for the library manually.
  • While this approach offers more control, it requires maintaining the type definitions yourself and ensuring they stay in sync with the library.

Remember:

  • Choose the method that best suits your project needs and maintenance preferences.
  • Avoid modifying files directly within node_modules to prevent potential issues.
  • If you decide to create local declarations, consider using a versioning system to track changes.

How to use React Hook Form with multiple FormProvider contexts?

When using react-hook-form and useFormContext with multiple forms inside the provider, on submit, how do I get each useFormContext data part as a separate object?

While useFormContext in react-hook-form allows accessing form context within a FormProvider, it doesn’t directly solve submitting multiple forms with separate data objects. Here are two common approaches to achieve this:

1. Wrapping each form with its own FormProvider:

This approach uses a dedicated FormProvider and useForm for each individual form. Upon submission, each form’s handleSubmit passes that form’s data to its callback as a separate object.

Here’s an example:

JavaScript
import React from "react";
import { FormProvider, useForm } from "react-hook-form";

const Form1 = () => {
  const methods = useForm();
  const { register, handleSubmit, formState: { errors } } = methods;

  const onSubmit = (data) => {
    console.log("Form 1 data:", data); // This is a separate object
  };

  return (
    <FormProvider {...methods}>
      <form onSubmit={handleSubmit(onSubmit)}>
        <input type="text" {...register("name", { required: true })} />
        {errors.name && <span>Name is required</span>}
        <button type="submit">Submit Form 1</button>
      </form>
    </FormProvider>
  );
};

const Form2 = () => {
  const methods = useForm();
  const { register, handleSubmit, formState: { errors } } = methods;

  const onSubmit = (data) => {
    console.log("Form 2 data:", data); // This is another separate object
  };

  return (
    <FormProvider {...methods}>
      <form onSubmit={handleSubmit(onSubmit)}>
        <input type="email" {...register("email", { required: true })} />
        {errors.email && <span>Email is required</span>}
        <button type="submit">Submit Form 2</button>
      </form>
    </FormProvider>
  );
};

const App = () => {
  return (
    <div>
      <Form1 />
      <Form2 />
    </div>
  );
};

export default App;

In this example, Form1 and Form2 each handle their own data through their own useForm instance and submit separate objects on submission.

2. Using a custom hook to manage multiple forms:

This approach creates a custom hook that manages multiple forms with a single useForm instance. The hook uses internal state to store each form’s data and exposes functions to access and submit it.

Here’s a simplified example:

JavaScript
import React from "react";
import { useForm } from "react-hook-form";

const useMultiForm = () => {
  const { register, handleSubmit, watch, formState: { errors } } = useForm();
  const [formData, setFormData] = React.useState({ form1: {}, form2: {} });

  const handleForm1Submit = (data) => {
    setFormData({ ...formData, form1: data });
    // You can also perform further actions with data here
  };

  const handleForm2Submit = (data) => {
    setFormData({ ...formData, form2: data });
    // You can also perform further actions with data here
  };

  return {
    register,
    handleSubmit,
    watch,
    errors,
    formData,
    handleForm1Submit,
    handleForm2Submit,
  };
};

const App = () => {
  const { register, handleSubmit, watch, errors, formData, handleForm1Submit, handleForm2Submit } = useMultiForm();

  return (
    <div>
      <form onSubmit={handleSubmit(handleForm1Submit)}>
        {/* Form 1 fields with register */}
      </form>
      <form onSubmit={handleSubmit(handleForm2Submit)}>
        {/* Form 2 fields with register */}
      </form>
      <pre>{JSON.stringify(formData, null, 2)}</pre>
    </div>
  );
};

export default App;

This approach manages data and submissions for both forms through a single useForm instance inside the custom hook. Note that because the forms share one instance, each submission receives every registered field; the handlers then store the result under the relevant key. You can customize it further to fit your specific needs, such as form validation and error handling.

Both approaches handle submitting multiple forms with separate data objects. Choose the one that best suits your application’s structure and complexity.

It seems this can be solved by prefixing the key used for inputs. Please show an example.

You’re absolutely right! You can indeed use key prefixes on your inputs to get separate data objects for each form from a single useForm instance. Here’s an example:

JavaScript
import React from "react";
import { useForm } from "react-hook-form";

const App = () => {
  const { register, handleSubmit, formState: { errors } } = useForm();

  const onSubmit = (data) => {
    console.log("Combined form data:", data);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <fieldset>
        <h2>Form 1</h2>
        <input 
          type="text" 
          {...register("form1.name", { required: true })} 
          placeholder="Name"
        />
        {errors.form1?.name && <span>Name is required</span>}
        <input 
          type="email" 
          {...register("form1.email", { required: true })} 
          placeholder="Email"
        />
        {errors.form1?.email && <span>Email is required</span>}
      </fieldset>
      <fieldset>
        <h2>Form 2</h2>
        <input 
          type="text" 
          {...register("form2.phone", { required: true })} 
          placeholder="Phone Number"
        />
        {errors.form2?.phone && <span>Phone Number is required</span>}
        <input 
          type="text" 
          {...register("form2.address", { required: true })} 
          placeholder="Address"
        />
        {errors.form2?.address && <span>Address is required</span>}
      </fieldset>
      <button type="submit">Submit</button>
    </form>
  );
};

export default App;

In this example:

  1. We use a single useForm instance for both groups of fields.
  2. We give each input a name with a prefix that corresponds to its form (e.g., form1.name, form2.phone).
  3. Because react-hook-form treats dots in field names as paths, submission produces a nested object in which data.form1 and data.form2 are each a separate object.

This approach provides a concise way to keep form data separate while using a single useForm instance. Remember to adjust the field names and prefixes based on your specific form structure.
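
To see why the prefixes come back as separate objects, here is a rough sketch of how dot-separated field names expand into nested data (a simplified imitation of what react-hook-form does internally, not its actual code):

```javascript
// Expand { "form1.name": "Ada" } into { form1: { name: "Ada" } } —
// a simplified stand-in for react-hook-form's dot-path handling.
function nest(flat) {
  const out = {};
  for (const [path, value] of Object.entries(flat)) {
    const keys = path.split(".");
    let node = out;
    for (let i = 0; i < keys.length - 1; i += 1) {
      // Create intermediate objects as needed while walking the path.
      node = node[keys[i]] ?? (node[keys[i]] = {});
    }
    node[keys[keys.length - 1]] = value;
  }
  return out;
}

const data = nest({ "form1.name": "Ada", "form2.phone": "555-0100" });
console.log(data.form1); // each prefix becomes its own object
console.log(data.form2);
```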

Is there a way to auto-prefix so I do not need to write it each time?

Unfortunately, react-hook-form doesn’t offer built-in functionality for automatic prefixing. However, you can achieve it using a custom wrapper component or higher-order component (HOC). Here’s an example using a wrapper component:

JavaScript
import React from "react";
import { useForm } from "react-hook-form";

const FormWrapper = ({ children, formName }) => {
  const { register, handleSubmit, formState: { errors } } = useForm();

  const handleFormSubmit = (data) => {
    console.log(`${formName} data:`, data);
  };

  return (
    <form onSubmit={handleSubmit(handleFormSubmit)}>
      {React.Children.map(children, (child) =>
        React.cloneElement(child, {
          ...child.props,
          ...register(`${formName}.${child.props.name}`),
        })
      )}
      {Object.entries(errors[formName] ?? {}).map(([key, fieldError]) => (
        <span key={key}>{fieldError.message || `${key} is invalid`}</span>
      ))}
      <button type="submit">Submit</button>
    </form>
  );
};

const App = () => {
  return (
    <div>
      <FormWrapper formName="form1">
        <input type="text" name="name" placeholder="Name" />
        <input type="email" name="email" placeholder="Email" />
      </FormWrapper>
      <FormWrapper formName="form2">
        <input type="text" name="phone" placeholder="Phone Number" />
        <input type="text" name="address" placeholder="Address" />
      </FormWrapper>
    </div>
  );
};

export default App;

This approach:

  1. Defines a FormWrapper component that takes children and formName as props.
  2. Uses useForm within the wrapper and defines handleFormSubmit to log data with the provided formName.
  3. Iterates over children and clones each element, spreading the result of register with the prefixed name (`${formName}.${child.props.name}`).
  4. Renders any errors found under the wrapper’s formName key.

This way, you can wrap your forms with the FormWrapper, providing the desired form name, and it automatically prefixes the input names with the form name. Remember to adjust this example to match your specific component structure and error handling needs.

How to / is it possible to ensure no internet access in JavaScript web application?

I have been interested in providing web apps that can be accessed from a website, but are used completely offline.
The reason for this is to prove that privacy is being protected at a technical level.
However, I have only been able to find a few solutions.

The obvious solution is to overwrite the APIs that access the internet, so that even external libraries cannot use them.
This is more difficult than it seems, given just how many ways JavaScript can interact with other servers these days.
There is also the issue of making sure this code executes before any external libraries; otherwise, those libraries can grab references to the original APIs before any global changes are made.
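
As a rough illustration of the approach (and its fragility), a lockdown script that must run before anything else might look like this; it covers only the most common APIs, and is a sketch rather than a complete solution:

```javascript
// Sketch: neutralize common network entry points before any other script
// runs. Must be the very first script loaded, and still cannot cover every
// channel (e.g. <img src>, CSS url(), WebRTC, DNS prefetch).
(function lockDownNetwork(g) {
  const blocked = () => {
    throw new Error("Network access is disabled");
  };
  g.fetch = blocked;
  g.XMLHttpRequest = function () {
    blocked();
  };
  g.WebSocket = function () {
    blocked();
  };
  if (typeof g.EventSource !== "undefined") {
    g.EventSource = function () {
      blocked();
    };
  }
})(globalThis);

try {
  fetch("https://example.com");
} catch (e) {
  console.log(e.message); // → Network access is disabled
}
```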

Nothing beats proper code reviews, but unless the entire web application is open source and popular enough to be checked by many developers, this isn’t really a good solution.

A better solution would be for users to have, say, an extension that watches for internet access and ensures it does not happen. This is also possible with the dev tools, and isn’t too difficult if extensions are unavailable, but it is still a lot of effort and risk, and not easy for non-technical users.

In Electron apps, it does seem partially possible to block internet access, but it doesn’t block at the Node level:
https://stackoverflow.com/a/69620535/1764521

At this point it seems there are no easy solutions, and doing the best possible at every stage is all that can be done. I’d say that would be good enough from a developer’s perspective, but from the user’s perspective, I think it still seems a bit risky for sensitive data.

I have left my conversation with AI below, which tried to find solutions but turned up only partially useful information. If anyone has any better solutions, please feel free to comment.

When creating a PWA, is it possible to completely prevent internet access? If yes, how should this be done, especially in a way that the end user can trust?

No, it’s not possible to completely prevent internet access within a PWA itself. Here’s why:

  1. PWAs rely on browsers: PWAs are web applications that leverage browser functionalities. Browsers are inherently designed to access the internet, and limiting this function would go against their core purpose.
  2. Security concerns: Completely blocking internet access within a PWA raises significant security concerns. It would essentially create a closed environment, making it difficult to implement security updates, bug fixes, and new features. This could leave the PWA vulnerable to known and future exploits.

However, there are alternative approaches that can offer a similar user experience while maintaining transparency and trust:

1. Leverage Caching and Offline Functionality:

  • You can leverage the service worker API to cache essential app resources like data and static files. This allows the PWA to function partially even when offline.
  • Design the PWA to gracefully handle offline scenarios by informing users they’re offline and displaying appropriate UI elements like error messages or cached versions of content.
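
The cache-first strategy a service worker typically uses can be sketched as a plain function, with the browser-only globals (caches, fetch) replaced by parameters so the logic stands alone; all names here are illustrative:

```javascript
// Cache-first lookup: return the cached response if present, otherwise
// fall through to the network. In a real service worker this would live
// inside a "fetch" event handler using caches.match and fetch.
async function cacheFirst(request, cache, networkFetch) {
  const hit = await cache.match(request);
  return hit !== undefined ? hit : networkFetch(request);
}

// Simulated cache and network for demonstration:
const fakeCache = {
  match: async (req) => (req === "/app.js" ? "cached app.js" : undefined),
};
const fakeFetch = async (req) => `network response for ${req}`;

cacheFirst("/app.js", fakeCache, fakeFetch).then(console.log);    // → cached app.js
cacheFirst("/data.json", fakeCache, fakeFetch).then(console.log); // → network response for /data.json
```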

2. User-controlled Offline Mode:

  • Implement an optional “offline mode” toggle within the PWA. This allows users to choose to operate offline for specific features or functionalities.
  • Clearly communicate the limitations and functionality available in offline mode to build trust with users.

3. Emphasize Transparency:

  • Clearly state in the PWA’s description and store listing that it requires an internet connection for full functionality.
  • Be upfront about any limitations regarding offline capabilities.

Remember, the key is to provide a secure and reliable experience for users while remaining transparent about the PWA’s capabilities and limitations.

So there are no features in PWA manifest, service workers, or other APIs that allow for preventing internet access even if the PWA is locally stored?

You are correct. There are no features within the PWA manifest, service workers, or other Web APIs designed to completely prevent internet access for a PWA.

Here’s a breakdown of why:

  • PWA Manifest: The manifest file focuses on providing information about the PWA, such as its name, icons, and launch details. It doesn’t have functionalities related to controlling network access.
  • Service Workers: Service workers act as intermediaries between the PWA and the network. While they can manage caching and network requests, they cannot entirely block network access. Their purpose is to enhance the offline experience, not restrict it completely.
  • Web APIs: Web APIs provide functionalities for various tasks like fetching data, accessing sensors, and manipulating the UI. None of the standard Web APIs offer the ability to disable network access altogether.

It’s important to understand that PWAs rely on the browser environment, and browsers inherently have internet access as a core function. Restricting it within a PWA would not only be technically challenging but also potentially detrimental to the app’s functionality and security.

Are there not talks about this anywhere? Surely fully offline web apps are desired.

You’re right, there are ongoing discussions and explorations regarding the possibility of fully offline web applications. While completely preventing internet access within PWAs faces the limitations mentioned earlier, there are alternative approaches and ongoing efforts aiming to achieve a similar goal:

1. WebAssembly (WASM): WASM is a low-level assembly language that allows developers to compile code from various languages (like C++) and run it efficiently within web browsers. This opens the possibility of creating web apps that can bundle their own functionalities and data, potentially enabling offline usage to a greater extent.

2. Web App Manifest – “network_state” property (proposal): There’s an ongoing proposal to introduce a new property called “network_state” within the PWA manifest. This property, if implemented, could potentially allow developers to indicate the level of network connectivity required by the PWA. While it wouldn’t directly restrict access, it could inform users about potential limitations when offline and provide a more transparent experience.

3. Dedicated APIs for offline functionality: Discussions are happening regarding the potential for dedicated Web APIs that could enhance offline capabilities for web apps. These APIs could focus on functionalities like local data storage, synchronization, and background tasks, offering developers more tools to build robust offline experiences.

It’s important to note that these are ongoing discussions and proposals, and their implementation or final form may vary. However, they highlight the ongoing interest in and efforts towards providing better offline experiences and functionalities for web applications.

How about if I use Electron?

Electron offers a different approach compared to PWAs, and it can be used to create applications that function entirely offline. Here’s how:

Electron’s Offline Capabilities:

  • Bundled Application: Unlike PWAs that rely on the browser, Electron apps are self-contained, meaning they bundle all necessary resources (HTML, CSS, JavaScript, and other files) within the application package. This eliminates the need for internet access after the initial download and installation.
  • Native Functionality: Electron allows developers to access native system functionalities like file systems, local storage, and device hardware. This enables developers to create applications that can store data locally, perform tasks without requiring internet access, and offer a more desktop-like experience.

However, there are some trade-offs to consider:

  • Distribution complexity: Electron applications are not directly deployable through app stores like PWAs. They need to be packaged for specific operating systems and distributed through custom channels.
  • Platform dependency: Electron applications are native and run differently on various operating systems (Windows, macOS, Linux) compared to the cross-platform nature of PWAs. This means developers need to create and maintain separate codebases for different platforms.

Overall, Electron is a viable option if:

  • You prioritize complete offline functionality and native-like features.
  • You are comfortable with the complexities of building and distributing a desktop application.

If your primary focus is on:

  • Easy deployment and accessibility through web browsers.
  • Providing a web-based experience with enhanced offline capabilities, PWAs remain a valuable option.

Ultimately, the best approach depends on your specific project requirements and priorities.

However, in Electron, how can offline-only operation be proven to the user?

Proving that an Electron application functions entirely offline is a multifaceted process that involves a combination of technical considerations and user experience design:

1. Code Review and Testing:

  • Thorough code review: Conduct a comprehensive review of your application’s code to ensure there are no hidden dependencies on external resources or network calls. This involves verifying all functionalities and data access methods.
  • Offline testing: Rigorously test your application in a completely disconnected environment to simulate an offline scenario. This helps identify any unexpected behavior or potential issues related to missing resources.
  • Static code analysis tools: Utilize static code analysis tools to scan your codebase and identify potential network interactions or external dependencies that might not be readily apparent during manual review.

2. User Experience Design:

  • Clear communication: Inform users upfront about the app’s offline capabilities. This can be done through the application description, splash screen, or in-app messages.
  • Offline mode indicator: Implement a visual indicator like an icon or message to clearly communicate the app’s online/offline status to users.
  • Graceful handling of potential network issues: Handle potential network disruptions gracefully by displaying informative messages and providing alternative functionalities while offline.
  • Avoid misleading functionality: Avoid offering features or buttons that appear to be functional but are inoperable due to the lack of internet connection. Disable or gray out such features in offline mode.

3. Third-party library assessment:

  • Evaluate dependencies: If you utilize third-party libraries, carefully evaluate their potential network requirements. Opt for libraries with minimal dependencies or those explicitly designed for offline usage. Consider alternatives or implementing workarounds if necessary.

4. Documentation:

  • Provide clear documentation: Clearly document the application’s offline capabilities and limitations in the user manual or in-app help section. This helps users understand what to expect when using the app offline.

It’s important to remember that fully guaranteeing complete offline functionality can be challenging due to external factors like unforeseen updates or changes in the operating system or underlying libraries. However, by implementing these strategies, you can significantly increase user trust and confidence in your Electron application’s offline capabilities.

Is there nothing a bit clearer, such as a manifest property/permission?

Unfortunately, there isn’t a single, universally clear manifest property or permission in Electron that explicitly guarantees an application functions entirely offline. However, there are ongoing discussions and potential future advancements that might bring us closer to such a solution:

1. Declarative Net Request Permission (proposal): A proposal for a new API called “Declarative Net Request Permission” is under discussion. This API could potentially allow developers to declare their application’s network access requirements within the manifest file. While not a direct guarantee of offline functionality, it could provide a clearer indication to users about the app’s intended network usage.

2. Code signing and verification: While not a direct solution within Electron itself, advancements in code signing and verification technologies could offer a layer of trust to users. By cryptographically signing your application code and having it verified by a trusted authority, users could gain more confidence in the application’s integrity and potential offline capabilities.

It’s important to note that these are proposals and future advancements, and their implementation or final form is subject to change. However, they highlight the ongoing efforts to improve transparency and user trust regarding application behavior, including offline functionality.

For now, the strategies mentioned earlier, focusing on code review, testing, user experience design, and clear communication, remain the most effective ways to establish trust and confidence in your Electron application’s offline capabilities.

I am trying to use tsc in the cli on files in node_modules. Are files in node_modules excluded by default? What should I do?


Yes, files in the node_modules directory are excluded by default when using the tsc compiler in the CLI. This is because the node_modules folder typically contains third-party libraries written in JavaScript, which don’t require type checking by the TypeScript compiler.

Here are your options to include files from node_modules:

1. Not Recommended:

  • Directly including node_modules: While technically possible to include the entire node_modules directory in your compilation, it’s strongly discouraged. This can significantly slow down compilation due to the large number of files and potential conflicts with your own project’s code.

2. Recommended approach:

  • Use type definitions: Most popular libraries in node_modules have corresponding type definition packages available on npm (https://www.npmjs.com/). These packages usually start with the @types prefix, followed by the library name (e.g., @types/express). Install the relevant type definition package using npm install @types/<library-name>. This provides type information for the library without including the actual source code.

3. Specific file inclusion (for rare cases):

  • In rare cases, you might need to compile specific files from node_modules. You can do this by passing those file paths directly as arguments to tsc (note there is no --include CLI flag; include and exclude exist only as tsconfig.json options). This should be used with caution, as it can introduce conflicts and dependencies on the external library’s implementation details.

Remember, the best practice is to leverage type definitions whenever possible. They provide type information without unnecessary compilation overhead and maintain a clear separation between your project code and external libraries.

For the command I am using there is little reason to have excluded directories, how can I remove all excluded directories from the settings in tsc cli? No tsconfig.json please.

When you pass file names to tsc directly on the command line, tsconfig.json is ignored entirely, so there are no exclude settings in play to begin with; node_modules is only excluded when compiling through a tsconfig.json.

Bash
tsc node_modules/some-library/file.ts your_file.ts

Key points:

  • Files listed on the command line are compiled directly, along with anything they import.
  • There is no --include CLI flag; include and exclude exist only as tsconfig.json options.
  • If you want to compile a file in isolation, the --noResolve flag skips module resolution, but it has broader implications:
    • No module resolution: References to other modules won’t be resolved, potentially leading to errors if your code depends on external modules.
    • No type information from imported files: Types from other files won’t be considered, reducing type checking accuracy.

Best practices:

  • Use --noResolve judiciously, as it can bypass intended project structure and type checking.
  • For most scenarios, creating a tsconfig.json file with appropriate include and exclude sections is recommended for better control and maintainability.
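
If you do later opt into a tsconfig.json, a minimal one that deliberately pulls a node_modules package into the compilation might look like this (all paths are illustrative; the empty exclude array overrides the default node_modules exclusion):

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": true,
    "noEmit": true
  },
  "include": ["src/**/*", "node_modules/some-library/**/*"],
  "exclude": []
}
```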

If I have generated only typescript type declaration files from a project that I am importing, how can I make the types work?

If I have generated only typescript type declaration files from a project that I am importing, how can I make the types work? The types do not seem to include any mention of the package name so I assume some form of mapping would be required.

You’re correct that simply importing declaration files (.d.ts files) wouldn’t automatically work with your project unless they are properly configured. Here’s what you need to do to make the types work:

1. Mapping with tsconfig.json:

  • Locate your tsconfig.json file: This is the configuration file for your TypeScript compiler.
  • Add the compilerOptions property: Under this property, add "baseUrl" and "paths" options.
  • Set baseUrl: This option specifies the base directory against which non-relative module names and the paths mappings are resolved (often the project root).
  • Set paths: This option maps module names to file paths, resolved relative to baseUrl. Define a mapping for the package you’re importing. Here’s the syntax:
JSON
"paths": {
  "your-package-name/*": ["path/to/your-package-name/*.d.ts"]
}

2. Using Type Aliases (Optional):

Even with the mapping, the imported types might not have the same names as in the original package. To address this:

  • Define type aliases in your own code for the imported types. This allows you to use familiar names for the types while still benefiting from type safety.

Example:

Assuming you’re importing a package named my-package:

tsconfig.json:

JSON
{
  "compilerOptions": {
    "baseUrl": "src/types", // Assuming your types are in a "types" folder
    "paths": {
      "my-package/*": ["../node_modules/my-package/*.d.ts"]
    }
  }
}

Your code:

TypeScript
// Assuming the type declaration defines an interface called "MyComponent"
import type { MyComponent } from "my-package";

// Define type alias
type MyCustomComponent = MyComponent;

// Use the alias like any other type
const myComponent: MyCustomComponent = { /* ... */ };
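For the JSDoc setup this post started from, the same declaration can be consumed from plain JavaScript via an import() type. A sketch, assuming the tsconfig/jsconfig mapping above and a hypothetical MyComponent export:

```javascript
// @ts-check

/** @typedef {import("my-package").MyComponent} MyComponent */

/**
 * @param {MyComponent} component - hypothetical type from "my-package"
 * @returns {MyComponent}
 */
function passThrough(component) {
  return component;
}

// At runtime the annotations are just comments:
console.log(passThrough({ name: "demo" }));
```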

In TypeScript, how can I reference the value being extended for a generic on a type?

In TypeScript, how can I reference the value being extended for a generic on a type? For example, if I have type MyType<T extends string> = Record<T, number>; , how could I reference the “string” being extended for the generic T?

While you can’t directly reference the exact value being extended within a generic type declaration, here are several effective approaches to achieve similar goals:

1. Conditional Types:

  • Employ conditional types to constrain type behavior based on the generic’s relationship to the extended type.
TypeScript
// A mapped type cannot declare extra members in the same braces,
// so the conditional-type member is added via an intersection:
type MyType<T extends string> = { [P in T]: number } & {
  isString: T extends string ? true : false; // Leverage conditional type
};

const myObject: MyType<"name" | "age"> = {
  name: 42,
  age: 30,
  isString: true, // Correctly infers as true
};

2. Type Inference and Utility Types:

  • Utilize type inference and built-in utility types to derive information about the generic.
TypeScript
// keyof recovers the key union of a concrete instantiation:
type KeysOf<T> = keyof T;

const myString: KeysOf<Record<"name" | "age", number>> = "name"; // Correctly infers as "name" | "age"

3. Type Parameters and Constraints:

  • Define additional type parameters with constraints for more control and flexibility.
TypeScript
type MyType2<T extends string, U extends T = T> = Record<U, number>;

const myObject2: MyType2<"name" | "age"> = {
  name: 42, // U defaults to T, i.e. "name" | "age"
  age: 30,
};

Remember:

  • These techniques offer ways to work with generic type relationships without direct value access.
  • Choose the approach that best aligns with your specific use case and constraints.
  • For more complex scenarios, consider combining these techniques or exploring advanced TypeScript features.

None of those solutions get the type of the generic being extended. Are there no real solutions?

You’re absolutely right. Unfortunately, TypeScript currently lacks a mechanism to directly retrieve the exact type value being used for extension within a generic type definition. The methods presented earlier provide alternative approaches to achieve similar goals and leverage type information, but they don’t capture the precise extended value.

There are ongoing discussions and proposals for introducing “higher-kinded types” in future TypeScript versions, which could potentially enable direct access to the extended type value. However, this feature is not yet available, and it’s unclear when or if it will be implemented.

For now, the best course of action is to adopt one of the previously mentioned strategies that best suits your specific requirements and consider refactoring your code if necessary to avoid relying on directly accessing the extended type value.

I want to create some utility functions that use the “any” type, and it seems that using unknown or another type is not a solution. What should I do?

In the TypeScript project with typescript-eslint I am working on, using the “any” type is prevented via “no-explicit-any”, and comments such as “ts-ignore” that would allow “any” in code are prevented via “ban-ts-comment”. I want to create some utility functions that use the “any” type, and it seems that using unknown or another type is not a solution. What should I do?

While “no-explicit-any” and “ban-ts-comments” are in place to enforce type safety, there are a few approaches you can consider for creating utility functions using the “any” type in your specific situation:

1. Opt-out for specific files:

  • .eslintrc.json: If absolutely necessary, you can configure ESLint to ignore specific files or directories containing your utility functions. This allows using “any” in those files while maintaining type safety in the rest of the project. However, use this approach with caution as it weakens the type system’s benefits.
JSON
{
  "rules": {
    "@typescript-eslint/no-explicit-any": "off" // at the top level this disables the rule project-wide; use "overrides" to scope it to specific files
  }
}
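Alternatively, a file-level disable comment at the top of the utility file opts just that file out without touching the shared config (a sketch; identity is a hypothetical utility):

```typescript
/* eslint-disable @typescript-eslint/no-explicit-any --
   intentionally untyped utility; see the trade-off discussion above */

// Hypothetical utility that genuinely needs `any`:
function identity(value: any): any {
  return value;
}

console.log(identity(42)); // 42
```

Note that “ban-ts-comment” only restricts @ts-… comments, so an eslint-disable comment remains available.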

2. Create a separate TypeScript project:

  • Consider creating a separate, smaller TypeScript project specifically for your utility functions. This allows using “any” without compromising the type safety of your main project. You can then import the necessary functions from the utility project into your main codebase.

3. Explore alternative approaches:

  • While not ideal, could you explore alternative ways to achieve the functionality of your utility functions without relying on “any”? This might involve:
    • Refactoring the code to use generic types, conditional types, or type assertions more strategically.
    • Revisiting the design of the functions to avoid the need for such broad typing.

4. Evaluate the necessity of “any”:

  • Critically evaluate whether using “any” is truly unavoidable. Often, the “unknown” type can be a safer alternative, allowing some type checking while acknowledging the unknown nature of the data.

Remember, using “any” should be a last resort due to the potential drawbacks it introduces. It’s essential to weigh the benefits against the risks before making a decision.

It’s also crucial to document your reasoning clearly in the code if you choose to opt-out of type safety for specific functions. This helps maintain code clarity and informs future developers about the trade-offs made.

“Opt-out for specific files” seems like the correct solution. How do I apply this to a specific file and/or a specific directory?

You have two options to apply the “opt-out” approach for specific files or directories:

1. Using overrides in your .eslintrc.json:

  • This approach allows you to configure different ESLint rules for specific parts of your project. Here’s an example:
JSON
{
  "overrides": [
    {
      "files": ["src/utils/*.ts"], // Target all files in the "utils" directory
      "rules": {
        "@typescript-eslint/no-explicit-any": "off" // Disable "no-explicit-any" for these files
      }
    },
    {
      "files": ["src/specific-file.ts"], // Target a specific file
      "rules": {
        "@typescript-eslint/no-explicit-any": "off" // Disable "no-explicit-any" for this file
      }
    }
  ]
}

In this example, the first override disables “no-explicit-any” for all TypeScript files within the “src/utils” directory, while the second disables it for the specific file “src/specific-file.ts”.

2. Using a dedicated .eslintignore file:

  • You can create a .eslintignore file in the root directory of your project, listing files or directories for ESLint to skip. Be aware that ignored files are excluded from all linting, not just this one rule, so prioritize overrides over .eslintignore to maintain clear configuration and avoid unintended consequences.
# Ignore the "utils" directory and all its files
src/utils/

# Ignore a specific file
src/specific-file.ts

typedoc seems to be including all the files from tsconfig.json “includes” property. Is this correct? I would like to include only code from my “src” directory regardless of what tsconfig.json “includes” contains. Is this possible in typedoc?

Yes, you can configure TypeDoc to include only files from your “src” directory, even if your tsconfig.json file has a broader “includes” property. Here are the primary methods to achieve this:

1. Using the --exclude option:

  • When running TypeDoc from the command line, pass the --exclude option multiple times to specify the directories you want to exclude:

    Bash
    typedoc --exclude "**/node_modules/**" --exclude "**/tests/**" --out docs src
    
    • This command will generate documentation from files in the “src” directory, while excluding files in “node_modules” and “tests” directories.

2. Using a configuration file:

  • Create a TypeDoc configuration file (e.g., typedoc.json or typedoc.js) with the following options:

    JSON
    {
      "exclude": [
        "**/node_modules/**",
        "**/tests/**"
      ],
      "entryPoints": ["./src"]
    }
    
  • Then, run TypeDoc without explicitly specifying input files:

    Bash
    typedoc --out docs
    

3. Pointing TypeDoc at a specific tsconfig (for complex projects):

  • If your project involves multiple tsconfig.json files, set the tsconfig option in your TypeDoc configuration file so TypeDoc uses the right compiler context (the option takes a single path):

    JSON
    {
      "exclude": [
        "**/node_modules/**",
        "**/tests/**"
      ],
      "entryPoints": ["./src"],
      "tsconfig": "./tsconfig.json"
    }
    
    

Remember:

  • Place the configuration file (typedoc.json) in the root of your project.
  • Adjust exclude patterns and entry points based on your specific project structure.

By following these methods, you can effectively control which files are included in your TypeDoc-generated documentation, ensuring it focuses solely on your desired source code.
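To make the invocation repeatable, the TypeDoc command can live in a package.json script (a sketch, assuming typedoc.json sits alongside it):

```json
{
  "scripts": {
    "docs": "typedoc --out docs"
  }
}
```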

Using webpack, what is the simplest way to bundle JS files from an array of paths?

Here’s how to bundle JS files from an array of paths in webpack:

1. Leverage the entry property:

Webpack allows defining the entry points for bundling. You can achieve this using an array:

JavaScript
const path = require('path');

module.exports = {
  entry: [
    path.resolve(__dirname, 'src/file1.js'),
    path.resolve(__dirname, 'src/file2.js'),
    // Add more paths to your JS files here
  ],
  // ... other configurations
};

This approach explicitly defines each entry point.

2. Utilize dynamic entry points (alternative):

For a more dynamic solution, entry can also be a function, as long as it returns one of the shapes webpack accepts (a string, an array, or an object mapping bundle names to paths):

JavaScript
const path = require('path');

module.exports = {
  entry: () => {
    const files = [
      path.resolve(__dirname, 'src/file1.js'),
      path.resolve(__dirname, 'src/file2.js'),
      // ...
    ];
    // Build an entry object: each file becomes a named bundle
    // (file1.js -> bundle "file1", file2.js -> bundle "file2").
    return Object.fromEntries(
      files.map((file) => [path.basename(file, '.js'), file])
    );
  },
  // ... other configurations
};

This function returns an entry object, so each path produces its own named bundle.

Explanation:

  • path.resolve(__dirname, 'path/to/file.js'): This resolves the absolute path to your JS files using the path module (built-in with Node.js).
  • In both methods, replace 'src/file1.js' and 'src/file2.js' with the actual paths to your JS files.
  • The output configuration (not shown here) defines where the bundled file(s) will be placed.

Additional points:

  • With an array entry (method 1), webpack merges all the files into a single “main” bundle; the object entry in method 2 produces a separate chunk per file.
  • You can configure further bundling behavior using loaders, plugins, and other webpack options.


From VS Code, how do I open the Git repository of the currently open directory in a browser?

VS Code itself cannot directly open the web interface of the Git repository for the currently open directory. However, there are two alternative approaches:

1. Using the Origin URL:

  • Open the integrated terminal in VS Code (Terminal > New Terminal).
  • Run the command: git remote -v.

This will display information about configured remotes for the repository. Look for the line mentioning “origin” (assuming the remote is named origin). It will show the URL for the remote repository.

  • Copy the URL and paste it into your web browser. This should open the repository on the respective hosting platform (e.g., GitHub, GitLab).
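The copy-and-paste step can be scripted. A sketch that rewrites a git@host:owner/repo.git SSH remote into a browser-friendly https URL (shown on a hard-coded example URL; inside a repo you would feed it url=$(git remote get-url origin)):

```shell
# Normalize a remote URL for the browser:
#   git@github.com:owner/repo.git -> https://github.com/owner/repo
url='git@github.com:owner/repo.git'
echo "$url" | sed -E 's#^git@([^:]+):#https://\1/#; s#\.git$##'
```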

2. Leveraging Extensions:

  • Option A: GitHub Codespaces (if applicable):

    • Install the “GitHub Codespaces” extension in VS Code.
    • If the repository is hosted on GitHub and you have access, you can utilize the “Codespaces” feature.
    • With the extension installed, you’ll see a “Codespaces” option in the Command Palette (Ctrl+Shift+P).
    • Select “Codespaces: Create New Codespace” and choose the desired repository and branch.
    • This will launch a cloud-based VS Code environment directly connected to the repository.
  • Option B: Remote Repositories extension (for broader support):

    • Install the “Remote Repositories” extension.
    • Open the Command Palette (Ctrl+Shift+P) and search for “Remote Repositories: Open Remote Repository”.
    • This extension allows browsing and editing repositories from various platforms (e.g., GitHub, GitLab, Azure Repos) without needing a local clone.

Remember, these extensions require additional setup and might not be suitable for all situations.

In summary:

  • While VS Code doesn’t directly open the web interface, the provided methods offer workarounds:
    • Using the origin URL: Suitable for quick access if the remote is configured as “origin.”
    • Extensions: Offer functionalities like cloud-based editing (Codespaces) or broader remote repository support (Remote Repositories).